I'm excited to share that I have officially started a side gig writing tech articles for XDA Developers. Check out my first two articles! https://lnkd.in/gK2cQ_ub
She can do it all!!!
When Stability went down the research-license route, I was worried about high-end open-source diffusion models. However, recent releases like FLUX.1, Kolors, and AuraFlow have really inspired a lot of faith. And just when it was looking too good, Apple dropped their diffusion model, which uses a nested UNet to speed up convergence. I think we have a lot of really good models now; the next step is building the ecosystem around them. SDXL is extremely popular because it has support for every pipeline, fine-tuning technique, ControlNet, IP-Adapter, inpainting: legit everything you can think of, SDXL supports it. That, I feel, becomes more important than just having the most realistic image-generation model. Exciting times ahead https://lnkd.in/e6C8fAwj
I'm thrilled to share our latest updates on our paper, "Alympics: LLM Agents Meet Game Theory". This study dives into the strategic capabilities of Large Language Model (LLM) agents. This work is not just a paper; it's a journey into the intersection of artificial intelligence and human strategic behavior.
Paper Link: https://lnkd.in/gnW8K3Qz
🏅 ALYMPICS: Language Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents
Key Questions Addressed:
🌟 Framework: How do we build a unified, controllable, and efficient framework that simulates human strategic interactions and propels game theory research forward?
🌟 Methodological Advances: What are the possible methods for leveraging LLM agents in game theory exploration?
🌟 Human-like Strategy: Does an LLM agent mimic human strategic thinking, and to what extent?
Highlights:
🌟 Systematic Simulation Framework: We've developed a unified simulation platform using LLM agents, specifically designed for game theory research. "Alympics" bridges the gap between theoretical and empirical studies, providing a controlled environment to simulate human-like strategic interactions.
🌟 Game Setting - Water Allocation Challenge: Inspired by classic game-theory dilemmas, this setting is a robust mix of auction theory, resource allocation, survival strategies, and more. It's a unique blend that tests the strategic prowess of LLM agents while avoiding the data-leakage issues of classic games. We also demonstrate how to conduct qualitative and quantitative analysis of game determinants, strategies, and outcomes with Alympics.
🌟 Comprehensive Human Evaluation: We've conducted an in-depth subjective evaluation of LLM agents in strategic games, focusing on aspects like information utilization, logical reasoning, and long-term planning. This is crucial to understand whether these agents truly exhibit rational and strategic behavior, and what their advantages and disadvantages are.
Stay tuned for more updates!
We'll be releasing all resources associated with "Alympics", including prompts, code, and human evaluation records, on our GitHub repository: https://lnkd.in/g3rZD6qN. Let's explore the future of LLM agents and game theory together! #AIResearch #LLMAgents
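To give a feel for the Water Allocation Challenge's auction flavor, here is a toy sketch of a single sealed-bid round in Python. This is an illustration only, not the paper's actual code: the agent names, the bids, and the one-unit-per-winner rule are all made-up assumptions, and in the real framework each bid would come from an LLM agent's reasoning rather than being passed in directly.

```python
def sealed_bid_round(budgets, bids, supply):
    """One round of a toy water-allocation auction: the `supply`
    highest bidders each win one unit of water and pay their bid.
    Ties are broken deterministically by agent name."""
    ranked = sorted(bids, key=lambda a: (bids[a], a), reverse=True)
    winners = set(ranked[:supply])
    # Winners pay their bid; losers keep their full budget.
    new_budgets = {
        a: budgets[a] - (bids[a] if a in winners else 0) for a in budgets
    }
    return winners, new_budgets

# Three agents compete for two units of water.
budgets = {"alice": 10, "bob": 10, "carol": 10}
bids = {"alice": 4, "bob": 2, "carol": 3}
winners, budgets = sealed_bid_round(budgets, bids, supply=2)
print(winners)          # {'alice', 'carol'}
print(budgets["bob"])   # 10 (bob lost the round and pays nothing)
```

Iterating rounds like this, with budgets carrying over, is what makes the setting a mix of auction theory and survival strategy: overbidding wins water now but exhausts the budget for later rounds.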
When it comes to RAG, optimizing and improving performance and quality relies heavily on the retrieval step. We talk a lot about RAG evaluation and about optimizing retrieval by measuring recall, precision, etc. And we try to improve these metrics by implementing different retrieval techniques:
- We do hybrid search (check out the hybrid search capabilities of Qdrant and Weaviate, for example), combining keyword and embedding search.
- We select embedding models specifically for our language and use case.
- We leverage metadata for filtering, or embed it for additional semantic signals.
But 'optimizing' isn't always just about how good something is. It's also about how efficient something is. Sometimes, by sacrificing a negligible amount of quality, we can make retrieval a lot more time- and space-efficient. Optimizing embedding models through quantization improves RAG applications by providing:
🚀 Higher throughput
🚀 Lower latency
🤏 Reduced memory and cost requirements with quantization to int8
This is what fastRAG by Intel Labs provides with their CPU-optimized embeddings, all of which are available as an integration to #Haystack. Bilge Yücel and Peter Izsak published a whole article on fastRAG components and quantization, as well as how you can start using them! I'll share the article in the comments below ⭐️ Great work, and kudos to Intel Labs and the Haystack team 🧡 #AI #RAG #Python #opensource
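For intuition on why int8 quantization costs so little quality, here is a minimal NumPy sketch of scalar int8 quantization of embeddings. This is a generic illustration under simple assumptions (one scale for the whole matrix, synthetic random embeddings), not fastRAG's actual implementation.

```python
import numpy as np

def quantize_int8(embs):
    """Scalar int8 quantization: map floats in [-limit, limit]
    to integers in [-127, 127] using a single shared scale."""
    limit = np.abs(embs).max()
    scale = limit / 127.0
    q = np.round(embs / scale).astype(np.int8)
    return q, scale

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
embs = rng.normal(size=(1000, 384)).astype(np.float32)

q, scale = quantize_int8(embs)
deq = q.astype(np.float32) * scale  # dequantize for comparison

# int8 storage is 4x smaller than float32, yet cosine similarity
# between documents barely moves.
sim = cosine(embs[0], embs[1])
sim_q = cosine(deq[0], deq[1])
print(embs.nbytes // q.nbytes)   # 4
print(abs(sim - sim_q) < 0.01)   # True
```

Real libraries typically compute the scale (and sometimes per-dimension ranges) from a calibration set rather than from the matrix being quantized, but the space/accuracy trade-off is the same idea.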
Context windows of 1 billion tokens: a language model tailored specifically for Retrieval-Augmented Generation (RAG) use cases. https://lnkd.in/envv7Es2
NVIDIA Introduces RankRAG: A Novel RAG Framework that Instruction-Tunes a Single LLM for the Dual Purposes of Top-k Context Ranking and Answer Generation in RAG https://lnkd.in/dWXG3EfX
First Look: The Raspberry Pi AI Kit Is a Budget Add-On for Code Dabblers https://buff.ly/3LbQCq5
Grok-1 has 314 billion parameters (about 296 GB of weights), compared with Meta's Llama 2 (70 billion parameters) and Mistral's Mixtral 8x7B (roughly 13 billion active parameters per token). To launch Grok-1 for training, or even just for fine-tuning, I think you will need multiple H100 (80 GB) GPU cards...
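A quick back-of-envelope check of why multiple H100s are needed. The 1.2x runtime-overhead factor below is an assumption to cover activations and KV cache; training or fine-tuning needs several times more memory again for gradients and optimizer states.

```python
import math

def min_gpus(n_params, bytes_per_param, gpu_mem_gb, overhead=1.2):
    """Rough lower bound on GPUs needed just to hold the model
    for inference. `overhead` is an assumed fudge factor for
    activations/KV cache; training needs far more memory."""
    weight_gb = n_params * bytes_per_param / 1e9
    total_gb = weight_gb * overhead
    return math.ceil(total_gb / gpu_mem_gb)

# Grok-1: 314B parameters at 2 bytes each (bf16) on 80 GB H100s.
print(min_gpus(314e9, 2, 80))  # weights alone are 628 GB -> 10 GPUs
# Llama 2 70B, same assumptions, fits on a much smaller node.
print(min_gpus(70e9, 2, 80))   # 3 GPUs
```

Even before any training overhead, bf16 weights alone exceed a full 8x80 GB node, which is why serving Grok-1 generally means multi-node or aggressive quantization.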
Diving further into on-device machine learning with Rust, I came across Burn (https://burn.dev/), a deep learning framework. Now, I don't have any bias towards TFLite (now LiteRT? 😭), but competition is always appreciated, especially when monoliths start taking over the whole space. So I used Burn to implement a basic digit-detection app on Android, since I wanted to verify that it would compile fine for other platforms, and some time (and mistakes) later, it worked! Burn works with ONNX models, and not all operators are supported as of now, so there are some caveats to keep in mind before using it in prod, but I am hopeful for the future. Also check out Candle (https://lnkd.in/gV2mq4AC), another interesting, similar library by Hugging Face. Code: https://lnkd.in/gmQgmzRj Read more: https://lnkd.in/g47k_DcB
AI is the new OS! I'll write a more thorough post on this, but AI creates new opportunities to change the way we interact with the world. A number of interesting devices announced at CES seek to harness LLM tech to replace the typical user interface of computing and mobile devices with AI-enhanced interactions. One of the more interesting of these is rabbit and rabbitOS. The device is driven by a specially trained 'Large Action Model' (LAM), a new type of foundation model designed to be task-centric: it understands how users accomplish objectives using computing devices. https://lnkd.in/gXa_r3ek
Aspiring Data Scientist at NUCES | Proficient in Python, R, SQL, C/C++, Flutter, Swift, PHP/HTML | Backend Development Skills | Passionate about Deep Learning, NLP, AI, ML | Dedicated to Data Science & IT Innovation
🚀 Excited to share my latest project in artificial intelligence: a Tic-Tac-Toe game with an unbeatable computer opponent! Check out this brief overview and demonstration below:
🎯 Project Overview: I implemented a Tic-Tac-Toe game with an initial board state, winning-condition checks, and a smart computer opponent using the Minimax algorithm with pruning.
🤖 Smart Computer Opponent: The highlight of this project is the computer opponent, which always secures either a win or a draw. By applying Minimax with pruning, the computer makes optimal moves, preventing the player from ever winning.
💻 Code Snippet: Take a quick look at the code snippet video below to see how the game logic works and how the computer opponent chooses its moves.
🔍 Key Features:
- Initial state setup for the Tic-Tac-Toe board.
- Efficient winning-condition checks for both players.
- Smart computer opponent using the Minimax algorithm with pruning.
- Seamless gameplay experience for users.
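For readers curious about the technique, here is a minimal, self-contained sketch of minimax with alpha-beta pruning (the most common pruning variant) for Tic-Tac-Toe. This is an illustration in the same spirit as the project, not its actual code, and the board representation is my own assumption.

```python
# The eight winning lines on a 3x3 board, as cell indices 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-2, beta=2):
    """Return (score, best_move) for `player` to move.
    'X' maximizes (+1 = X wins), 'O' minimizes (-1 = O wins);
    branches that cannot affect the result are pruned."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(board) if c == " "]
    if not moves:
        return 0, None  # board full: draw
    best_move = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X", alpha, beta)
        board[m] = " "  # undo the trial move
        if player == "X":
            if score > alpha:
                alpha, best_move = score, m
        else:
            if score < beta:
                beta, best_move = score, m
        if alpha >= beta:
            break  # alpha-beta cutoff: opponent will avoid this branch
    return (alpha if player == "X" else beta), best_move

# Perfect play from the empty board is a draw, so the engine
# can never lose: the human gets at best a draw.
score, move = minimax(list(" " * 9), "X")
print(score)  # 0
```

The pruning cutoff is what makes the search cheap: once a branch is already worse than something the opponent can force elsewhere, its remaining moves are never explored.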
Apparently not enough on your plate. Great idea though, good luck and congratulations!!!