Meta unveiled its new multimodal model family, Llama 4, last Saturday to maintain its strong position in the artificial intelligence (AI) race. Two versions, Llama 4 Scout and Llama 4 Maverick, can be downloaded and used now. The new AI models can process text, images, audio and video. However, the most advanced version, Llama 4 Behemoth, is still in training.
Last March, Meta's chief product officer, Chris Cox, said the Llama 4 models would make AI agents more intelligent and capable. These agents will be able to perform a variety of complex tasks, including reasoning and web browsing. As a result, consumers and businesses will get effective solutions to a range of problems.
Llama 4
Llama 4 has two main public models. Llama 4 Scout is the smaller, more efficient model, designed to run on a single Nvidia H100 GPU. The model has substantial capacity for processing long documents or conversations.
Scout has a 10-million-token context window. It is a mixture-of-experts model with 17 billion active parameters and 16 experts (sub-networks inside the model), which help the model run more efficiently. Meta says Scout is more powerful than its previous-generation Llama models. In general, the more parameters a model has, the better its results tend to be.
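The "experts" mentioned above form what is called a mixture-of-experts (MoE) layer: a small gating network routes each token to a few expert sub-networks, so only a fraction of the model's total parameters does any work per token. The following is a minimal sketch in Python with NumPy; the hidden size, the linear experts and the top-2 routing are illustrative assumptions, not Llama 4's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 16   # number of expert sub-networks (Scout uses 16)
TOP_K = 2        # experts consulted per token (illustrative choice)
D = 8            # hidden dimension (toy size for the sketch)

# Each expert is a simple linear map; the gate scores experts per token.
experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((D, N_EXPERTS))

def moe_layer(x):
    """Route a token vector x to its top-k experts and mix their outputs."""
    scores = x @ gate_w                    # one gate score per expert
    top = np.argsort(scores)[-TOP_K:]      # indices of the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the chosen experts only
    # Only TOP_K of the N_EXPERTS experts run for this token,
    # which is why an MoE model is cheaper than its total size suggests.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
out = moe_layer(token)
print(out.shape)
```

The design point is the gap between total and active parameters: all 16 experts exist in memory, but each token pays the compute cost of only two of them.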
Llama 4 Maverick, on the other hand, is a more complex model. It can run on a single Nvidia H100 DGX system. Meta claims that on some tasks it keeps pace with leading AI systems such as GPT-4o and Gemini 2.0 Flash while using fewer computing resources. It is a mid-sized model with 17 billion active parameters and 128 experts.
Scout's context window is especially large: 10 million tokens. (A token is a unit of text, such as a word or a punctuation mark.) Scout can process not just text but also images; it is an AI model that can analyze an image, understand the information it contains and act on it, for example by describing a photo or answering questions about a picture. Because Scout can take in images along with several million words at once, it can process very long documents.
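To make the idea of tokens and a context window concrete, here is a rough sketch of how text is split into tokens and how little of the window even a long text consumes. The splitter below is a crude word-and-punctuation rule for illustration only; real models such as Llama use subword tokenizers, so actual counts differ.

```python
import re

# Scout's advertised context limit, in tokens (per Meta's announcement).
CONTEXT_WINDOW = 10_000_000

def rough_tokens(text):
    """Very rough tokenization: split text into words and punctuation marks."""
    return re.findall(r"\w+|[^\w\s]", text)

doc = "Scout can read very long documents. Really long ones!"
toks = rough_tokens(doc)
print(len(toks), toks[:5])

# Even a document of a few million words fits inside the window:
print(f"{len(toks) / CONTEXT_WINDOW:.8f} of the window used")
```

On this rough rule, a whole book of around 100,000 words would still occupy only about one percent of a 10-million-token window.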
In addition, the models are multimodal, meaning they can work with different types of data, for example analyzing a picture, summarizing a video or answering questions about an audio clip. Meta claims these models are better than previous versions at answering questions and generating code.
Another powerful model has been added to the Llama 4 family, called Llama 4 Behemoth. Meta claims it is one of the most powerful AI models the company has ever built and one of the most intelligent models available today. Behemoth primarily acts as a "teacher" model, used in the training of new AI models. Notably, the Llama 4 Scout and Maverick models were trained from Behemoth using a process called "distillation".
Thus, the Behemoth model is much more powerful than the other models, and it helps improve the development and performance of other AI models.
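Distillation, as described above, means training a smaller "student" model to match the large "teacher" model's output distribution rather than only hard labels. A minimal sketch of the soft-label loss follows; the logit values and temperature are illustrative assumptions, not Meta's actual training setup.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Turn raw scores (logits) into probabilities, optionally softened."""
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's prediction
    return float(np.sum(p * np.log(p / q)))

teacher = [4.0, 1.0, 0.5]   # e.g. a teacher's scores for 3 candidate next tokens
aligned = [3.8, 1.1, 0.4]   # a student that closely tracks the teacher
off     = [0.1, 3.0, 2.0]   # a student that disagrees with the teacher

print(distillation_loss(teacher, aligned))  # small loss
print(distillation_loss(teacher, off))      # much larger loss
```

Minimizing this loss pushes the student toward the teacher's full probability distribution, which carries more information than the single correct answer alone; that is what lets a compact model like Scout inherit capability from a giant like Behemoth.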
How to use the models
This technology can help improve the AI features of WhatsApp, Messenger and Instagram. More advanced chatbots or media-interpretation tools (for analyzing and interpreting photos or videos) can be built on these platforms. Developers and businesses will be able to use these models in their own software for customer support, content moderation or data analysis.
Unlike some competitors' models, the Llama 4 Scout and Llama 4 Maverick models have been published as open-source software. As a result, anyone can customize the two models to suit their own needs.
Users can now try the models on Meta's website or on third-party platforms such as Hugging Face.
According to reports from the news agency Reuters and The Information, Llama 4 was released later than originally scheduled. Meta plans to invest heavily in AI infrastructure this year. However, it is not yet clear how these models will fare in a rapidly changing market.
References: TechCrunch, Reuters