Hello, amazing robotics people! We have FINALLY delivered on your major request! Ark just got a major upgrade:
We've now integrated Vision-Language-Action Models (VLAs) into Ark. VLAs = models that connect vision + language to robot actions (see image).
What does this mean?
- Give robots natural language instructions and they act
- Combine perception + language for real-world control
- Powered by pi0 pretrained models for fast prototyping
- Supports easy data collection and fine-tuning in Ark in a couple of lines of code
Next, we plan to go into the world of designing worlds. Who knows, maybe those video models are actually zero-shot learners and reasoners?
We're thrilled to announce the release of Zagros-1.0-Quick on Hugging Face: a 30.5B-parameter multilingual MoE language model with a Persian heart! Built on the innovative Zagros architecture, it delivers efficient, high-performance NLP for text generation, translation, and more across English, Persian, Arabic, and beyond. Open-source and ready for your projects. Download now and help us test it on real-world tasks! Check it out: darsadilab/zagros-1.0-quick #ZagrosLLM #PersianAI #MultilingualNLP #MoE
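Since the model is hosted on the Hugging Face Hub, a minimal way to try it is via the standard `transformers` text-generation pipeline. This is a generic sketch, not official usage from the Zagros team: only the repo id `darsadilab/zagros-1.0-quick` comes from the announcement, and loading a 30.5B-parameter model requires substantial GPU memory.

```python
# Hypothetical quick-start sketch for Zagros-1.0-Quick, assuming the
# standard `transformers` text-generation pipeline works for this repo.
MODEL_ID = "darsadilab/zagros-1.0-quick"  # repo id from the announcement


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion with Zagros-1.0-Quick.

    Note: this downloads and loads a 30.5B-parameter MoE model,
    so it needs significant disk space and GPU memory.
    """
    from transformers import pipeline  # pip install transformers

    pipe = pipeline("text-generation", model=MODEL_ID)
    out = pipe(prompt, max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

Because the model is multilingual, the same call should accept English, Persian, or Arabic prompts.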