Introducing: Frontier, the future of front-end by Anima
In the age of generative AI, we expect AI to simply understand us, and in many cases it already does. It is pure magic when a tool provides exactly what you need based on a tiny hint.
Our goal at Anima is to automate front-end engineering so humans don’t waste their time. During 2023, Anima’s AI produced over 750k code snippets, the equivalent of 1,000 years of human coding. With over 1 million installs on Figma’s platform, Anima is leading the design-to-code space.
As the next phase, we are going deeper into automating day-to-day front-end coding.
Today’s LLMs do not understand Front-end and UI
There are many models around code generation, from code completion to instruction following, and several popular copilots – coding assistants built on these models that help you code faster.
However, when it comes to front-end automation, we believe there’s a big gap between what’s out there and what’s possible. With Anima’s capabilities and our understanding of this domain, we’re aiming to close that gap.
And so, today, we announce Frontier – an AI coding assistant for developers building front-end.
Frontier – AI Code-gen with your code in mind, tailored for frontend
Anima Frontier meets developers where they are: in the IDE, starting with VS Code, the most popular one.
First, Frontier analyzes the entire codebase and maps your code design system, frameworks, conventions, and components. This part takes seconds and is done locally, so your code is as secure as possible.
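To make the first step concrete, here is a minimal sketch of what local design-system indexing could look like. Everything in it (the `ComponentEntry` shape, the regex, the in-memory codebase) is an assumption for illustration, not Anima’s actual implementation:

```typescript
// Hypothetical sketch of local component indexing: scan source files for
// exported React components and record their names and prop names.
// All of this runs on the developer's machine; nothing leaves it.

interface ComponentEntry {
  name: string;    // exported component name, e.g. "Button"
  file: string;    // path relative to the repo root
  props: string[]; // prop names parsed from the function signature
}

// Matches e.g. `export function Button({ label, onClick }: ButtonProps)`
const COMPONENT_RE = /export\s+function\s+([A-Z]\w*)\s*\(\s*\{([^}]*)\}/g;

function indexFile(file: string, source: string): ComponentEntry[] {
  const entries: ComponentEntry[] = [];
  for (const match of source.matchAll(COMPONENT_RE)) {
    const [, name, propList] = match;
    entries.push({
      name,
      file,
      props: propList
        .split(",")
        .map((p) => p.split(":")[0].trim())
        .filter(Boolean),
    });
  }
  return entries;
}

// Example: index an in-memory "codebase" (a real indexer would walk the repo).
const codebase: Record<string, string> = {
  "src/Button.tsx":
    "export function Button({ label, onClick }: ButtonProps) { return null; }",
};

const index = Object.entries(codebase).flatMap(([file, src]) =>
  indexFile(file, src)
);
```

A production indexer would of course use a real parser rather than a regex, but the shape of the output – a per-component record that can live alongside the code – is the key idea.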
Second, using Anima’s state-of-the-art design-to-code engine, Frontier analyzes your design and matches what it finds there against your design system’s components in code.
And lastly, you can pick any part of the Figma design right inside VS Code and get code based on YOUR code. It is magical.
Check out this walkthrough of Frontier by Andrico, a developer at Anima.
Increasing Design-system adoption with automation
Mature projects often have hundreds of components, if not thousands.
Design-system governance and adoption are challenging tasks that are crucial for maintaining these projects. Automation helps.
AI Security and Guardrails
Frontier was built from the ground up to offer an Enterprise-level secured solution.
AI adoption in Enterprise companies faces extra friction due to two common privacy concerns:
- Egress privacy: How do we ensure that our code doesn’t “leak” into the LLM model through training, which means other companies might receive snippets of our code?
- Ingress privacy: How do we ensure that other companies’ code that might have been fine-tuned or trained into the LLM, doesn’t enter our code base – causing security and potentially copyright concerns?
In order to generate code that integrates Anima’s interpretation of the Figma design but uses the components in the user’s code base, we could have taken the easy way and simply trained the LLM on the code base. This has severe privacy and security implications, as we would have needed to upload a significant amount of user/enterprise code and train a custom LLM on it. We realize how critical security and privacy are, particularly to developers in Enterprise environments, so we took a completely different direction.
Instead of uploading code to the cloud, we implemented data gathering, indexing, and ML models that run locally, inside VS Code. These identify and index the relevant code on the developer’s machine. The gathered information is stored locally, as part of the code base, which means it can be shared securely within the team through Git – not through the cloud. When a particular component needs to be instantiated, we perform a significant amount of preprocessing locally and send the LLM in the cloud only the small amount of code and information it needs – not enough to expose the enterprise to any Ingress or Egress risk. This approach has the added benefit of performance, as most of the operations run on the developer’s fast machine.
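The "send only what the LLM needs" idea can be sketched as follows. The relevance ranking and prompt format here are invented for illustration (this is not Anima’s actual protocol); the point is that everything except the final small prompt stays local:

```typescript
// Hypothetical sketch of minimal-context prompting: given a locally built
// component index, select only the entries relevant to a design layer and
// build a compact prompt. Only this small slice would leave the machine.

interface IndexEntry {
  name: string;
  props: string[];
}

// Naive local relevance filter: keep index entries whose name appears in
// the design layer's name, up to a small limit.
function selectContext(
  designLayerName: string,
  index: IndexEntry[],
  limit = 3
): IndexEntry[] {
  const needle = designLayerName.toLowerCase();
  return index
    .filter((e) => needle.includes(e.name.toLowerCase()))
    .slice(0, limit);
}

function buildPrompt(designLayerName: string, context: IndexEntry[]): string {
  const snippet = context
    .map((e) => `- <${e.name} ${e.props.join(" ")} />`)
    .join("\n");
  return `Render "${designLayerName}" using only these components:\n${snippet}`;
}

const componentIndex: IndexEntry[] = [
  { name: "Button", props: ["label", "onClick"] },
  { name: "Card", props: ["title"] },
];
const ctx = selectContext("Primary Button", componentIndex);
const prompt = buildPrompt("Primary Button", ctx);
```

Note how the Card component never reaches the prompt: irrelevant code simply never leaves the developer’s machine.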
Under the hood of Frontier – LLMs, ML, and AI-first architecture
Anima Frontier automates the front-end with AI, building on Anima’s vast experience leading this space and the most advanced tech for the mission.
We often see impressive weekend projects that are 99% powered by LLMs and have amazing results 30% of the time. These are cool projects, but they are not suitable for professionals.
LLMs, as powerful as they are, open new doors but aren’t silver bullets; they require a supporting environment. At Anima, we test and benchmark, choosing the right tool for the right task. When it comes to LLMs, we provide them with context, validating their results and setting them up for success.
In the process of solving this complex problem, we broke it down into tens of smaller problems and requirements. Some require creativity and are best solved with LLMs, where specific models are faster and perform better than others. Others are classic machine-learning / computer-vision problems, i.e. classification rather than generation. And some are solved best with heuristics.
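The routing idea above can be sketched with a tiny dispatcher. The sub-problem names and categories here are hypothetical examples, not Anima’s internal task list:

```typescript
// Illustrative sketch: route each sub-problem to the cheapest tool that
// solves it well - an LLM for generative tasks, a classifier for
// recognition tasks, and plain heuristics for deterministic ones.

type Solver = "llm" | "classifier" | "heuristic";

interface SubProblem {
  id: string;
  kind: "generate" | "classify" | "deterministic";
}

function route(problem: SubProblem): Solver {
  switch (problem.kind) {
    case "generate":
      return "llm"; // creative: e.g. emitting JSX for a novel layout
    case "classify":
      return "classifier"; // recognition: e.g. "is this layer an icon?"
    default:
      return "heuristic"; // rule-based: e.g. converting px to rem
  }
}

const plan: SubProblem[] = [
  { id: "emit-jsx", kind: "generate" },
  { id: "detect-icon", kind: "classify" },
  { id: "px-to-rem", kind: "deterministic" },
];

const assignments = Object.fromEntries(
  plan.map((p) => [p.id, route(p)])
);
```

Keeping the LLM out of the deterministic and recognition steps is what bounds hallucination risk: the LLM only handles the parts that genuinely need generation.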
By combining best-in-class solutions for each individual problem, we can produce mind-blowing results with minimal risk of the LLM hallucinations that are so prevalent in LLM-based code solutions.
What’s next for Frontier
As we look to utilize everything possible with AI to help developers code front-end faster, it feels like we’re just scratching the surface. Anima Frontier should be able to merge code with design updates, heal broken code, understand states and theming, name elements correctly, read specs, and think more and more like a human developer.
We have a rich list of features, and we need you to tell us what bothers you most and what you’d expect AI to do for front-end developers today. Join the conversation on Anima’s Discord channel.