Endor Labs has launched a new platform that scores more than 900,000 open-source AI models available on Hugging Face across four dimensions: security, activity, quality, and popularity, reports VentureBeat. The initiative aims to address concerns about the trustworthiness and security of AI models, which often carry complex dependencies and vulnerabilities. Developers can query the platform about a model's capabilities and receive insights into its security posture and how recently it has been updated. The scoring system draws on 50 metrics and continuously rescans models for changes.
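Endor Labs has not published its scoring formula, so the sketch below is purely illustrative of how normalized metrics might be rolled up into the four dimension scores the article describes; the metric names, values, weighting, and 0-10 scale are assumptions, not the vendor's actual method.

```python
# Hypothetical, normalized (0.0-1.0) raw metrics for one model.
# Metric names are illustrative; Endor Labs' actual ~50 metrics are not public.
RAW_METRICS = {
    "security":   {"known_vulnerabilities": 0.9, "weights_scan_clean": 1.0, "safetensors_format": 1.0},
    "activity":   {"recent_commit_cadence": 0.7, "issue_response_rate": 0.6},
    "quality":    {"documentation_completeness": 0.8, "eval_results_reported": 0.5},
    "popularity": {"downloads_percentile": 0.95, "likes_percentile": 0.9},
}

def dimension_score(metrics: dict) -> float:
    """Average one dimension's normalized metrics and scale to a 0-10 score."""
    return round(10 * sum(metrics.values()) / len(metrics), 1)

def score_model(raw: dict) -> dict:
    """Produce one score per dimension: security, activity, quality, popularity."""
    return {dimension: dimension_score(metrics) for dimension, metrics in raw.items()}

if __name__ == "__main__":
    for dimension, score in score_model(RAW_METRICS).items():
        print(f"{dimension:<10} {score}/10")
```

In practice, a system like this would refresh the underlying metrics on a schedule (the article notes continuous scanning), so the per-dimension scores would shift as a model's repository, weights, or maintenance activity change.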
George Apostolopoulos of Endor Labs highlights the risks that come with AI models, including malicious code injection and compromised credentials, and emphasizes the need for visibility into this space. He notes that AI development parallels open-source software development but introduces additional complexities stemming from the nature of the models and their dependencies.
The platform will eventually expand to models from other providers, including commercial ones such as OpenAI. It also addresses licensing challenges related to the datasets used to train AI models. Overall, the initiative seeks to strengthen the security and governance of AI as its adoption grows.