Perplexity offers training wheels for building AI agents
Generate modest interactive apps, spiffy charts, and bland screenplays as needed
Perplexity, an AI search biz, has launched Perplexity Labs, a project automation service capable of generating basic apps and digital assets on demand, with example workflows and project samples to help first-timers get started.
Foundation models, including large language models (LLMs), take inputs, analyze them against a massive trove of training data, and offer a response, usually in text but sometimes in image or video form. They can be described as agents when able to iterate over a series of prompts to accomplish a multi-step task. And they become more useful when granted computer-use powers and access to external tools and data.
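The loop that turns a bare LLM into an agent can be sketched in a few lines. The sketch below is a minimal illustration only, with a scripted stand-in for the model and invented tool names — it is not Perplexity's actual implementation:

```python
# Minimal agent-loop sketch: a "model" repeatedly picks a tool,
# observes the result, and stops when it decides the task is done.
# The model here is a scripted stub, not a real LLM call.

def fake_model(history):
    # A real agent would send `history` to an LLM; this stub scripts
    # a fixed plan: browse the web, run some code, then finish.
    steps = [("browse", "WWII Pacific timeline"),
             ("run_code", "len('1941-1945')"),
             ("done", None)]
    taken = len([h for h in history if h[0] == "action"])
    return steps[taken]

TOOLS = {
    "browse": lambda q: f"search results for {q!r}",
    "run_code": lambda src: repr(eval(src)),  # illustration only; never eval untrusted code
}

def run_agent(model, task):
    """Iterate model -> tool -> observation until the model says 'done'."""
    history = [("task", task)]
    while True:
        action, arg = model(history)
        if action == "done":
            return history
        history.append(("action", action))
        history.append(("observation", TOOLS[action](arg)))

transcript = run_agent(fake_model, "map the Pacific Theater 1941-1945")
```

Each pass through the loop feeds the tool's output back into the model's context, which is what lets the system carry out multi-step tasks rather than answer a single prompt.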
That's the current state of affairs with Perplexity Labs. The service allows paying Pro subscribers to choose one of several commercial and open-source LLMs – OpenAI's GPT-4o (aka GPT-4 Omni), Anthropic's Claude 3.5 Sonnet, or Claude 3.5 Haiku, among others – which can be used in conjunction with tools for automated browsing, code execution, and graphics creation.
"Labs can craft everything from reports and spreadsheets to dashboards and simple web apps — all backed by extensive research and analysis," the biz explains in a blog post. "Often performing 10 minutes or more of self-supervised work, Perplexity Labs use a suite of tools like deep web browsing, code execution, and chart and image creation to turn your ideas and to-do’s into work that’s been done."
Rival AI agent services from Anthropic, Google, Microsoft, and OpenAI offer similar capabilities, though with different interfaces and focuses.
We prompted Perplexity Search to differentiate Labs from Anthropic's and OpenAI's offerings, and it responded, "Each tool is best suited to different use cases: Perplexity Labs for rapid, project-based research and automation; Anthropic for discrete task automation; and OpenAI for complex, in-depth analysis and synthesis."
Perplexity Labs also offers a more example-driven experience than its competitors: the Project Gallery presents around 20 sample projects to give users an idea of what the system can produce.
One such project starts with a prompt to create an interactive map of the Pacific Theater during World War II. The result is viewable on the project webpage as an embedded interactive graphic that supports zoom, click-and-drag scrolling, and a time-based slider that updates the map across the 1941-1945 period. A full-screen version, hosted on AWS, is available, and the code can be downloaded.
The Perplexity Labs sample project page includes a set of tabs: Labs (the default gallery view), App (a full-page view), Assets (the code generated to create the project), Tasks (the series of prompts used to create the project), Image (visuals that matched the query), and Sources (the websites that provided the historical data).
The Tasks tab serves as a walk-through that can help illuminate how the interactive map app was developed. It begins by gathering data from various sources, moves on to consulting documentation about interactive maps and web graphics, and then generates Python, JavaScript, CSS, and JSON data and visual assets to create the app.
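The heart of such a time-slider map is simple data filtering: each event carries a date, and moving the slider re-renders only the events that have occurred by the selected year. A rough Python sketch of that filtering step, using invented sample data rather than the project's actual dataset:

```python
# Hypothetical sketch of the filtering behind a time-slider map.
# The event list is invented for illustration, not taken from the project.

EVENTS = [
    {"name": "Attack on Pearl Harbor", "year": 1941, "lat": 21.36, "lon": -157.95},
    {"name": "Battle of Midway",       "year": 1942, "lat": 28.20, "lon": -177.35},
    {"name": "Battle of Leyte Gulf",   "year": 1944, "lat": 10.37, "lon": 125.39},
    {"name": "Battle of Okinawa",      "year": 1945, "lat": 26.50, "lon": 128.00},
]

def events_up_to(year, events=EVENTS):
    """Return the events visible when the slider is set to `year`."""
    return [e for e in events if e["year"] <= year]

visible = events_up_to(1942)  # Pearl Harbor and Midway, but nothing later
```

In the generated app, the same logic would live in JavaScript wired to a slider widget, with the event records loaded from the JSON data the agent assembled.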
Perplexity Labs will also generate spreadsheets, interactive dashboards, research reports, data visualizations, and even a sci-fi movie storyboard and script that manages to be at once coherent and mediocre.
The service is available under the $20/month Pro plan, via the web, iOS, and Android, with Mac and Windows apps planned. ®