OpenAI has developed a new AI model that can generate media content such as images, videos, and audio roughly 50 times faster than conventional diffusion models. The new model, called a “continuous-time consistency model,” takes about a tenth of a second to generate an image instead of the usual five seconds, OpenAI researchers Cheng Lu and Yang Song report in a technical paper. It achieves this speed by mapping noise directly to high-quality samples in only one or two processing steps, rather than the hundreds of steps that conventional diffusion models need. With 1.5 billion parameters, the largest model reaches image quality within about ten percent of that of traditional diffusion models, VentureBeat reports.
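The paper's actual architecture and training procedure are not described here, but the difference between many-step diffusion sampling and one-or-two-step consistency sampling can be sketched roughly as follows. This is a minimal illustration, not OpenAI's code: the network, its size, and names such as `ToyDenoiser` are hypothetical placeholders, and the only point is the number of network evaluations per sample.

```python
import torch

class ToyDenoiser(torch.nn.Module):
    """Hypothetical stand-in for a trained generative network."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = torch.nn.Linear(dim + 1, dim)

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on the noise level t by appending it to the input.
        t_col = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, t_col], dim=1))


def diffusion_sample(model: ToyDenoiser, dim: int = 16, steps: int = 100) -> torch.Tensor:
    """Diffusion-style sampling: many small refinement steps (many network calls)."""
    x = torch.randn(1, dim)  # start from pure noise
    for i in reversed(range(steps)):
        t = torch.tensor([[i / steps]])
        x = x + model(x, t) / steps  # one small denoising update per step
    return x


def consistency_sample(model: ToyDenoiser, dim: int = 16, steps: int = 2) -> torch.Tensor:
    """Consistency-style sampling: noise is mapped to a sample in 1-2 network calls."""
    x = torch.randn(1, dim)  # start from pure noise
    for i in reversed(range(steps)):
        t = torch.tensor([[(i + 1) / steps]])
        x = model(x, t)  # each call jumps directly toward the data distribution
    return x


if __name__ == "__main__":
    model = ToyDenoiser()
    diffusion_sample(model)    # 100 forward passes
    consistency_sample(model)  # 2 forward passes
```

Because the cost of sampling is dominated by network evaluations, cutting the step count from hundreds to one or two is what produces the order-of-magnitude speedup the article describes, provided the single-step network is trained to land close to what the many-step process would produce.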