AI represents the future of workers’ compensation. However, like all new technologies, AI is not without its challenges and detractors. Here are a few considerations as we embark on this transformative journey.
Data Privacy and Security: It is crucial to ensure the privacy and security of the sensitive data that AI systems use. Privacy concerns apply both to the outputs AI generates and to the data used to produce them, and they are especially acute when dealing with medical records and personal information.
Extensive state and federal privacy laws, rules, and regulations already require careful management. The federal government and others are also developing focused AI regulations and compliance requirements that will complement current rules and add complexity to data access.
Bias and Fairness: Historical data often contains bias, in part because much of the data held by claims, safety, and underwriting operations lacks standard definitions. This is especially true for risk assessment, safety programs, and claims processing. It is important to ensure that new AI systems are trained on diverse and representative data to avoid discriminatory outcomes.
Interoperability: Significant challenges will arise in integrating AI systems with existing software and databases in the workers’ compensation industry. A standardized data dictionary and glossary would facilitate the development and integration of AI into existing systems. While the Medicare Set Aside and the Rating Bureaus have created islands of consistent data, more work is needed.
Regulatory Challenges: Many potential regulatory hurdles may slow the adoption of AI in workers’ compensation. Aligning AI technologies with existing regulations and standards is difficult because industrial and non-industrial disability programs and medical care programs overlap without coordination, and because accurate, consistent definitions are absent throughout the system. One example of increased governmental oversight and additional regulation is the pending Presidential executive order concerning AI technology.
Training and Education: Like the implementation of any new technology, it is essential to train and educate current workers’ compensation professionals about AI systems. Ensuring that adjusters and other stakeholders understand how to use AI tools effectively and interpret the outputs is crucial. Recognizing AI hallucinations is a specific skill that will need to be taught.
Ethical Considerations: All new technology can be used for both good and bad. Significant ethical dilemmas may arise when AI systems are used in decision-making processes, such as determining claims eligibility or settlement amounts. This underscores the importance of transparent and accountable AI systems and responsible management of those systems.
Human-AI Collaboration: AI can significantly complement the work of human claims adjusters rather than replace them. There are substantial benefits to a collaborative approach in which AI assists professionals in making informed decisions. If AI completely removes the human element, however, the technology’s potential results may go unrealized.
Legal and Liability Issues: As with any new technology, significant legal and liability issues may arise if AI systems are misused or used ignorantly and make incorrect predictions or decisions. These concerns are magnified by AI’s potential to hallucinate, producing output based on bad or non-existent data. One current example making the rounds is the legal brief that cited cases that do not exist. Ultimately, the responsibility rests with humans to ensure there are no unintended consequences or mischief.
Misplaced Incentives: The workers’ compensation system is rife with misplaced incentives. AI does not eliminate these problems and may even exacerbate some of them.
Ongoing Monitoring and Maintenance: As with all new technology, it is important to continuously monitor and maintain AI systems to ensure they remain accurate and up-to-date. Strategies for handling system failures or errors should be in place. No system improves without constant feedback and the identification of problems and errors. This axiom is particularly true for AI.
Stakeholder Acceptance: Without carefully gaining acceptance and trust from all stakeholders, including injured workers, employers, and insurance companies, AI will not succeed. Part of that acceptance is constant communication about what is being implemented, recognition of problems, and ownership of mistakes. This should include transparent communication about AI’s role in the process.
Cost-Benefit Analysis: Most companies will not implement every idea all at once. The determination of which processes to implement should include an accurate cost-benefit analysis. This analysis should consider both the initial investment and the potential long-term savings and improvements in outcomes.
Integration with Other Technologies: AI thrives on accurate and timely data. Emerging technologies, such as telemedicine and wearable devices, can supply that data, enhancing injury prevention, improving medical treatment and recovery, and reducing the costs of claims management.