MyanmarGPT, the first and largest usable Burmese-language Generative Pretrained Transformer, represents a significant milestone in Myanmar's foray into artificial intelligence. Developed by Min Si Thu, these models are not just technological achievements; they are also backed by robust, well-documented code, making them accessible and user-friendly for developers.

## MyanmarGPT: A Fusion of Power and Clarity

- **Free to Use and Open-Source:** MyanmarGPT and MyanmarGPT-Big are open-source models, allowing developers to freely explore, contribute, and integrate them into their projects. Both models are available on Hugging Face.
- **Lightweight and Accurate:** MyanmarGPT's 128 million parameters give it a lightweight design that is easy to deploy across devices without compromising accuracy, while MyanmarGPT-Big, with 1.42 billion parameters, caters to enterprise-level language processing, offering precision and versatility.
- **Burmese + International Languages:** The models support a total of 61 languages, prioritizing the Burmese language while embracing international diversity. This multilingual capability positions the project as a valuable resource for a wide range of developers.
- **Community-Driven Development:** The success of MyanmarGPT is fueled by community contributions. Under the guidance of Min Si Thu, the models continuously evolve, ensuring their relevance and effectiveness across various applications.

## Unveiling MyanmarGPT Models

### MyanmarGPT - 128M Parameters

MyanmarGPT, with its lightweight design, is suitable for a variety of applications. Below is an example of how to use it with the Hugging Face Transformers library.

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Using the pipeline API
pipe_gpt = pipeline("text-generation", model="jojo-ai-mst/MyanmarGPT")
outputs_gpt = pipe_gpt("အီတလီ", do_sample=False)
print(outputs_gpt)

# Using AutoTokenizer and AutoModelForCausalLM directly
tokenizer_gpt = AutoTokenizer.from_pretrained("jojo-ai-mst/MyanmarGPT")
model_gpt = AutoModelForCausalLM.from_pretrained("jojo-ai-mst/MyanmarGPT")
input_ids_gpt = tokenizer_gpt.encode("ချစ်သား", return_tensors="pt")
output_gpt = model_gpt.generate(input_ids_gpt, max_length=50)
print(tokenizer_gpt.decode(output_gpt[0], skip_special_tokens=True))
```

### MyanmarGPT-Big - 1.42B Parameters

MyanmarGPT-Big, designed for enterprise-level language modeling, currently supports 61 languages. Below is an example of how to use it with the Hugging Face Transformers library.

```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Using the pipeline API
pipe_big = pipeline("text-generation", model="jojo-ai-mst/MyanmarGPT-Big")
outputs_big = pipe_big("အီတလီ", do_sample=False)
print(outputs_big)

# Using AutoTokenizer and AutoModelForCausalLM directly
tokenizer_big = AutoTokenizer.from_pretrained("jojo-ai-mst/MyanmarGPT-Big")
model_big = AutoModelForCausalLM.from_pretrained("jojo-ai-mst/MyanmarGPT-Big")
input_ids_big = tokenizer_big.encode("ချစ်သား", return_tensors="pt")
output_big = model_big.generate(input_ids_big, max_length=50)
print(tokenizer_big.decode(output_big[0], skip_special_tokens=True))
```
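At 1.42 billion parameters, MyanmarGPT-Big can be heavy to load in full float32 precision. The sketch below shows one common way to reduce its memory footprint with standard Transformers options; it assumes a CUDA-capable GPU and the `accelerate` package, and the half-precision and device-placement choices are illustrative rather than part of the model's official documentation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# A minimal sketch: memory-conscious loading of the 1.42B-parameter model.
# Assumes a CUDA-capable GPU and the `accelerate` package for device_map.
tokenizer = AutoTokenizer.from_pretrained("jojo-ai-mst/MyanmarGPT-Big")
model = AutoModelForCausalLM.from_pretrained(
    "jojo-ai-mst/MyanmarGPT-Big",
    torch_dtype=torch.float16,  # roughly halves memory vs. float32
    device_map="auto",          # place weights on available devices
)

# Move the inputs to the same device as the model before generating
input_ids = tokenizer.encode("အီတလီ", return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_length=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```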
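Both usage examples above pass `do_sample=False`, so `generate()` decodes greedily and always returns the same continuation for a given prompt. For more varied output you can enable sampling instead; the snippet below is a minimal sketch using standard Transformers generation parameters, with illustrative values that have not been tuned for these models.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# A minimal sketch: sampled (non-greedy) generation with MyanmarGPT.
# The temperature/top_k/top_p values are illustrative, not tuned.
tokenizer = AutoTokenizer.from_pretrained("jojo-ai-mst/MyanmarGPT")
model = AutoModelForCausalLM.from_pretrained("jojo-ai-mst/MyanmarGPT")

input_ids = tokenizer.encode("အီတလီ", return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=50,
    do_sample=True,   # sample instead of greedy decoding
    top_k=50,         # restrict to the 50 most likely next tokens
    top_p=0.95,       # nucleus sampling over 95% of probability mass
    temperature=0.8,  # slightly sharpen the distribution
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```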
## Acknowledging Contributors

The success of MyanmarGPT is a collaborative effort, and gratitude is due to Min Si Thu and the vibrant community of contributors who have played a crucial role in shaping and refining these models.

## Conclusion

MyanmarGPT is not just a language model; it is a tool designed for developers, supported by clear and comprehensive code documentation. As Myanmar embraces artificial intelligence, MyanmarGPT stands as a symbol of progress and inclusivity, offering the community the resources needed to push the boundaries of technology in Myanmar and beyond.