U.S. Department of Labor’s AI best practices

The U.S. Department of Labor released a list of artificial intelligence best practices for developers and employers this week. Here’s the PDF.

The highlights:

  • Human-Centric AI Development: The document emphasizes that advancements in AI should prioritize human agency and creativity, aiming to enhance worker well-being and improve job quality.
  • Worker Engagement: It advocates for the active involvement of workers, especially from underserved communities, in all stages of AI development and deployment to ensure their needs and rights are considered.
  • Ethical Standards: Developers and employers are encouraged to establish ethical standards for AI systems that protect workers’ rights, mitigate risks, and ensure safety, thereby enhancing overall job quality.
  • Governance and Oversight: Organizations should implement clear governance structures and human oversight for AI systems to ensure accountability and to mitigate risks associated with their use.
  • Transparency in AI Use: Employers must provide workers with clear information regarding the AI systems in use, their purpose, and the data collected, fostering trust and security in the workplace.
  • Protection of Worker Rights: The document stresses that AI systems should not undermine workers’ rights to organize, their health and safety, or protections against discrimination, ensuring compliance with existing legal obligations.
  • Job Quality Enhancement: AI should be deployed in ways that assist and complement workers, improving job quality rather than automating away good jobs, thus maximizing benefits for both employees and employers.
  • Support for Displaced Workers: Employers are encouraged to provide retraining and upskilling opportunities for workers affected by AI-related job transitions, promoting a proactive approach to workforce management.
  • Responsible Data Use: The document outlines the importance of limiting the collection and use of worker data to legitimate business purposes while ensuring data protection and privacy.
  • Continuous Evaluation: Regular independent audits and evaluations of AI systems are recommended to ensure they are functioning as intended and to identify any adverse impacts on workers, allowing for timely corrections.
