Meta and the University of Washington have developed a new AI architecture called the Byte Latent Transformer (BLT) that processes language without traditional tokenization. As reported by Ben Dickson for VentureBeat, BLT works directly on raw bytes instead of a predefined token vocabulary, which makes it more versatile and efficient. The system uses three transformer blocks: two lightweight local encoder/decoder models and a main latent global transformer. Tests show that BLT matches the performance of established models such as Llama 3 while using up to 50% less compute. The architecture proves particularly effective at handling unusual patterns, misspelled words, and less common languages. According to the researchers, BLT demonstrates stronger character-level understanding and improved machine-translation performance for low-resource languages.
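
To make the three-block layout concrete, here is a minimal PyTorch sketch of the general idea: raw UTF-8 bytes pass through a small local encoder, are pooled into patch vectors handled by a larger latent transformer, and a small local decoder maps the result back to byte-level predictions. This is an illustrative assumption-based sketch, not Meta's implementation: BLT itself groups bytes into variable-length patches, whereas this example uses a fixed patch size, and the class name `TinyBLTSketch`, layer counts, and dimensions are all invented for brevity.

```python
import torch
import torch.nn as nn

class TinyBLTSketch(nn.Module):
    """Illustrative three-block layout: a lightweight byte-level encoder,
    a larger latent transformer over patch vectors, and a lightweight
    byte-level decoder. Patch boundaries are fixed-size here purely for
    simplicity; BLT derives them dynamically."""

    def __init__(self, patch_size=4, d_local=128, d_global=512, vocab=256):
        super().__init__()
        self.patch_size = patch_size
        self.byte_embed = nn.Embedding(vocab, d_local)

        # Lightweight local encoder over raw bytes.
        enc_layer = nn.TransformerEncoderLayer(d_local, nhead=4, batch_first=True)
        self.local_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)

        # Project pooled byte patches into the latent transformer's width.
        self.to_global = nn.Linear(d_local, d_global)

        # Main latent global transformer that operates on patch vectors,
        # not individual bytes -- this is where most of the compute lives.
        glob_layer = nn.TransformerEncoderLayer(d_global, nhead=8, batch_first=True)
        self.latent_transformer = nn.TransformerEncoder(glob_layer, num_layers=6)

        # Lightweight local decoder mapping patch states back to byte logits.
        self.to_local = nn.Linear(d_global, d_local)
        dec_layer = nn.TransformerEncoderLayer(d_local, nhead=4, batch_first=True)
        self.local_decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.byte_head = nn.Linear(d_local, vocab)

    def forward(self, byte_ids):              # byte_ids: (batch, seq_len)
        b, n = byte_ids.shape
        x = self.local_encoder(self.byte_embed(byte_ids))   # (b, n, d_local)

        # Pool each fixed-size group of bytes into one patch vector.
        p = n // self.patch_size
        patches = x[:, :p * self.patch_size].reshape(b, p, self.patch_size, -1).mean(dim=2)

        z = self.latent_transformer(self.to_global(patches))  # (b, p, d_global)

        # Broadcast each patch state back over its bytes and refine locally.
        y = self.to_local(z).repeat_interleave(self.patch_size, dim=1)
        y = self.local_decoder(y)
        return self.byte_head(y)               # (b, p * patch_size, 256) byte logits


if __name__ == "__main__":
    text = "Byte-level models skip tokenization".encode("utf-8")
    ids = torch.tensor([list(text)])           # raw UTF-8 bytes as model input
    logits = TinyBLTSketch()(ids)
    print(logits.shape)                        # torch.Size([1, 32, 256])
```

The point of the split is efficiency: the expensive global transformer runs over a much shorter sequence of patch vectors, while only the cheap local blocks touch every byte, which is consistent with the compute savings the researchers report.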