Artificial intelligence can be better understood through a few basic principles of how it works. In his 100th Substack post, Ethan Mollick explains that Large Language Models (LLMs) function essentially as sophisticated text-prediction systems, forecasting the next word in a sentence based on patterns learned from vast amounts of training data. These systems operate with limited “memory” – known as the context window – and can only draw on information exchanged within a single conversation. According to Mollick, roughly one-third of their training data comes from internet sources and another third from scientific papers, with the remainder drawn from books, code, and other material. He recommends spending at least ten hours using AI systems to get a feel for their capabilities and limitations.
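The two core ideas here – next-word prediction and a bounded context window – can be illustrated with a toy sketch. This is not how a real LLM works internally (actual models use neural networks trained on billions of tokens, not word counts), but it captures the intuition: the model picks the statistically most likely continuation, and it can only "see" the most recent slice of the conversation. All names and the tiny corpus below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training": count which word follows each word in a tiny corpus.
# Real LLMs learn far richer statistics, but the principle is the same.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

def truncate_context(tokens, window=4):
    """Mimic a context window: the model only sees the last `window` tokens."""
    return tokens[-window:]

print(predict_next("the"))                       # "cat" (follows "the" twice)
print(truncate_context("a b c d e f".split()))   # ['c', 'd', 'e', 'f']
```

The `truncate_context` helper shows why an AI "forgets" earlier parts of a long conversation: anything that falls outside the window simply never reaches the model.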