Artificial Intelligence pioneer Tulpa.ai urgently needed an early-phase technology website to engage potential pioneer customers, investors, and partners at an upcoming event. Working with The Lines Group, I wrote the site and go-to-market messaging.

Traditional AI approaches come with vastly different strengths, limitations and risks. In our discovery sessions with the founders, a Big Idea, ‘Artificial Intelligence that understands why’, quickly took centre stage. Tulpa’s ‘Wisdom Model’ knowledge capture methodology introduces game-changing causal logic: it makes it possible to train AI agents that both ‘reason’ like human experts and, crucially, explain their potential actions or intent (the ‘why’).

With a sensitive brand refresh by Lines, considered typography and engaging animated explainer reveals on scroll, the new website shares Tulpa’s innovative human-machine teaming approach, targeting potential investors, partners and users of the software with dedicated pages. The site was live, from blank sheet, within three weeks.
- COPY: Ian Castle, Freelance Copywriter
- AGENCY: The Lines Group (for Tulpa.ai)
- WEB DESIGN & PRODUCTION: The Lines Group
SaaS launch website copywriting sample: Tulpa
[Home extract]:
---------------------------------------
Page title: Tulpa - Where human and machine intelligence meet.
Page description: At last: Artificial Intelligence software that understands ‘why’. Discover the power of human-machine teamwork to create expert pre-trained AI with Tulpa…
---------------------------------------
Real intelligence starts and ends with you.
// Do more, better and faster, with expert-trained AI
Tulpa. Deploy Artificial Intelligence software that understands ‘why’.
// For considered, reasoned decision-making, human wisdom is irreplaceable. That’s why, for high-stakes, time-critical, exponentially complex challenges, expert pre-trained AI can help.
Generative AI and reinforcement learning are powerful techniques for creating AI agents, but they can be unreliable or difficult to interpret, posing risks and raising safety concerns. Our approach to developing and deploying agents is grounded in the need to ensure that a human can interpret and control what those agents do, and understand why. This high-trust relationship between human and machine intelligence is what we call ‘human-machine teaming’.
Combining behavioural science and machine learning, we rapidly capture and encode knowledge from experts. Our agents can fully explain why they took, or are proposing to take, a course of action, significantly assisting and informing the work of experts.
For high-stakes, mission-critical decision-making, bring human and artificial intelligence together with Tulpa.
Machine-augmented insight. Faster, considered decision-making with causal AI.
// As humans, we make decisions using both data and context. Our brain factors the ‘what’ and ‘how’, but also crucially the ‘why’ and ‘what if?’
By modelling an environment, expertise or task using causal logic, our artificial intelligence safely emulates human decisions at machine-speed, and at an unparalleled scale. Unlike ‘black box’ AI models, our causal AI ensures transparency of each action taken, with every decision fully explained and available for scrutiny and reporting.
The ‘Wisdom Model’. Because AI is only as good as its training.
// Capturing, sharing and supporting human knowledge and expertise is our biggest challenge with AI. Applying computer science, data science and behavioural science, we have used decades of experience to develop proprietary methods to capture the wisdom of experts and practitioners.
Selecting the very best experts in any organisation or domain, our scientists can rapidly draw out and capture the years of experience and decision logic employees routinely use to successfully perform their job.
For each task we encode a ‘Wisdom Model’: a unique, bespoke knowledge repository. It trains AI agents to execute tasks automatically while applying the very best practice, reasoning, and skills. Our agents are constantly learning: as they operate, they improve. Users can modify an agent’s behaviour without any coding experience. Furthermore, our ‘Wisdom Model’ can also be used to ground and validate the output from Large Language Models, so improving their trustworthiness.
Proof of concept: Machine speed cybersecurity.
// Our first AI agent has successfully been deployed to identify and test vulnerabilities on simulated defence computer networks.
Using our knowledge capture methodology, we built a ‘Wisdom Model’ mirroring the reasoning and actions of expert network penetration testers. Trained in the tactics, techniques and procedures of world-class experts, our AI agent autonomously completes network penetration test exercises. In our lab tests, novice pen testers working with the agent as their co-pilot improved their speed and accuracy by more than 300%.
Read about cybersecurity >
[About extract]:
---------------------------------------
Page title: About Tulpa
Page description: Offer trusted, trained AI agents, expert knowledge capture and human-machine teaming across your customer portfolio.
---------------------------------------
What we do.
// Tulpa develops safe AI assistants (co-pilots) to support and enhance human performance in high-stakes, mission-critical decision-making environments.
We enable high-performance, high-value, scalable workplace teamwork between humans and machines.
Why we’re different.
// By encoding and modelling human expertise as a springboard for transferring human wisdom to AI, our software-driven AI co-pilot agents accurately replicate advanced human decision-making, strategy, and actions.
Our AI agents are trained to be ‘sagacious’: exhibiting good judgement, being discerning, and deciding wisely.
Using causal AI, our co-pilots can be tasked to work autonomously – for example, through the night or in parallel with human teams. Our self-learning co-pilot agents explain every decision, allowing users to understand and trust their actions and fine-tune future behaviours.