With the introduction of GenAI into the workforce, more people are able to do more with less. While many people expect this to give the average person more free time, the immediate impact is counterintuitive: our perceived time available actually decreases. We become busier.
To understand this, we only need to look at what the internet did to productivity and spare time. Instead of booking tickets via a travel agent—who would call the airlines, negotiate prices with hotels, and prepare an itinerary—we use mobile phones to carry out all these tasks ourselves: we are doing the work for the companies we pay.
The less time needed to complete individual tasks, the higher the demand on our ability to complete them ourselves, and the less time we perceive we have.
What this means for creators and connectors is that the more time you can save your users, the more willing they will be to spend money on what you offer.
It also decreases attention spans and patience with interfaces: it has to work from the second the user connects. Latency has to be minimal. Any time-based friction becomes harder to negotiate away with features or pricing.
We are currently locked in a verbal battle with the machine. We spend more and more of our days simply arguing with AI.
I think it shines a light on the drawback of the current default interface, which is chat. It is and will continue to be a very important tool for human-AI communication, but as we expand the use cases and more industries adopt GenAI, we will need more versatile interfaces.
I want to build something around the Trello workflow that utilizes AI for small tasks. Here are some open source alternatives to build on:
Focalboard has 30k stars on GitHub but is no longer maintained.
Wekan has 20k stars on GitHub and is actively maintained. The design looks a bit spartan.
OpenProject has 10k stars on GitHub, is actively maintained and brands itself as the “Best open source Trello alternative.” It looks more like a full-on project management suite, though; it might be a bit heavyweight.
The bigger question is, what does an open-source, GenAI native Trello alternative look like?
This was displayed after login, and can be enabled in the organisation settings. Essentially, if you share your customers' data with OpenAI, you get rewarded with tokens in return:
Turn on sharing with OpenAI for all prompts, completions, and traces from your organization to help us develop and improve our services, including for improving and training our models.
It’s a time-limited deal:
Get free usage of up to 250 thousand tokens per day across gpt-4.5-preview, gpt-4o and o1, and up to 2.5 million tokens per day across gpt-4o-mini, o1-mini and o3-mini on traffic shared with OpenAI through April 10, 2025.
n8n is a GenAI native automation platform, doing essentially what Zapier and Make.com do but with GenAI built-in from the start. It’s also open source friendly, and I’ve written about their Sustainable Use License before.
AgentMark is Markdown for prompt engineering. It’s a beautiful and intuitive format, and it solves the problem of where to store your prompts (separate from your code), how to iterate on prompts and, together with the Puzzlet platform, evaluate their performance and observe their usage in production.
When I praised Trello in a previous post, I mentioned I would write about how this blog is built and updated, so here’s how I write posts for this blog. This setup allows me to draft new posts on desktop and mobile in a familiar format (Markdown), using a tool of my choice (Trello), with minimal code, while remaining in control of the process.
It is also a straightforward example of human-AI hand-off that doesn’t require me to talk to an LLM directly.
I have a Trello board with these columns:
👤 Drafts, 🤖 To prepare, 👤 Ready for review, 🤖 To publish, 👤 Published
When I want to write a new post, I just create a new card in the drafts column. I type out the post in the description of the card, paste any images into the card, and either leave it in drafts to come back to later, or drag it over to the To prepare column.
Once there, an automation in Make picks up the card, calls OpenAI’s API to format the post in Markdown and select a suitable title. It posts the output as a comment on the card, and moves it over to the Ready for review column.
When I have time I check in the Ready for review column, make sure the comment looks good, and make any final alterations or corrections. I make sure all the attached images are good to publish alongside the post. I then move it to the To publish column.
Another Make automation picks up the card and triggers a GitLab job. The job creates a new file in GitHub, adds the text from the comment on the card, downloads any attached images, places them in the correct folder, and adds links in the post to display the images. It commits all changes and pushes them to main (which in turn triggers another deploy job in GitLab, and the changes are live). Finally, the job adds the date of the post to the card name and moves the card to the Published column.
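To make the hand-off concrete, here is a simplified sketch of the part of that job that writes the post and its images into the repository checkout. The folder layout, file naming and helper name are illustrative rather than my exact code:

```python
# Sketch: write a post (Markdown from the card comment) plus its attached
# images into the repo. Paths and naming conventions are assumptions.
import re
from pathlib import Path

import requests


def publish_post(title: str, markdown: str, image_urls: list[str],
                 repo_root: Path = Path(".")) -> Path:
    """Write the post file and its images, returning the post path."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    post_dir = repo_root / "posts"
    image_dir = repo_root / "images" / slug
    post_dir.mkdir(parents=True, exist_ok=True)
    image_dir.mkdir(parents=True, exist_ok=True)

    # Download each attachment and append a Markdown image link to the post.
    for i, url in enumerate(image_urls):
        image_path = image_dir / f"{slug}-{i}.png"
        image_path.write_bytes(requests.get(url, timeout=30).content)
        markdown += f"\n\n![{title} image {i}](/images/{slug}/{image_path.name})"

    post_path = post_dir / f"{slug}.md"
    post_path.write_text(markdown, encoding="utf-8")
    return post_path
```

The job then commits the new files and pushes to main, which kicks off the deploy.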
Lovable enables anyone to write software for the web
Lovable is a no-code platform for building (sort of full-stack) web applications. It’s pretty amazing if you are not a developer and want to build your own web application. It’s like Cursor but without you having to read any of the code (unless you want to).
They raised $15M in February and have 30,000 paying customers.
Skype will be replaced by Teams in May 2025. All users will be migrated automatically. It’s kind of sad, but it makes perfect sense now that Teams is in widespread use among companies and individuals alike.
Imagine the scale of the migration project, getting all the contacts across, making sure no data is lost. That could have been a fun one to have worked on. Goodbye Skype, thank you for everything!
The first thing to be replaced by GenAI is Ctrl+C Ctrl+V
We are now in this weird phase of GenAI, where the knowledge economy is increasingly driven by people's ability to copy and paste text to and from LLMs.
Every time you copy and paste, stop and think, how could GenAI do this? That's another app you could build.
YOLO mode in Cursor auto-runs suggested commands. For security reasons, it’s probably something to leave to Manus or some other self-contained entity, rather than running on your own workstation. It is a compelling process though, letting the LLMs run their course towards a solution, only stopping when they get stuck or need to make an important design decision.
Created by n8n, the Sustainable Use License seems like a good option for new SaaS platforms:
Our goals when we created the Sustainable Use License were:
To be as permissive as possible.
Safeguarding our ability to build a business.
Being as clear as possible what use was permitted or not.
The license comes with three limitations:
You may use or modify the software only for your own internal business purposes or for non-commercial or personal use.
You may distribute the software or provide it to others only if you do so free of charge for non-commercial purposes.
You may not alter, remove, or obscure any licensing, copyright, or other notices of the licensor in the software. Any use of the licensor's trademarks is subject to applicable law.
The main points here are: use and modification only for internal business purposes (that’s the AWS clause, to prevent getting the Elastic treatment) and only being able to distribute/provide to others for non-commercial purposes.
This video by Hai Jun (WeChat: daxin261) shows a very interesting concept and a glimpse of the digital landscape to come.
He uses html.to.design to capture a website and convert it to a Figma design, then Claude Sonnet 3.7 to turn the design into an app. Imagine this flow if it gets really good - you can take any app and essentially clone it in its entirety.
They don't do rocket science, they just help users save time copying and pasting back-and-forth between LLM chats and other platforms.
There's a whole host of products being built and waiting to be built that are basically just GenAI native versions of some existing product or solution.
I am writing this post on my phone, in the Trello app. When the draft is done, I will drag the card over to the "prepare" column, and an LLM will take over, and then hand back to me for final touches (I might cover this in detail in another post).
Trello is amazingly powerful and an understated tool in almost any knowledge economy workflow. It combines the simplicity of Apple's Notes app with the structure of a spreadsheet (lists are columns and cards are rows!), with collaboration built in from the start.
I'd love to explore more Trello integrations, or maybe build a GenAI-native Trello alternative with MCP at the foundational layer.
Replit has a pay-per-request pricing model. I wonder how this fares with the average user. The first thing that confused me was the 5¢ per edit request. What is an edit request? How many will a prompt result in?
With my user hat on, I would think “why do I need to calculate, or even consider, the number of tokens I use?”. I probably prefer to pay a monthly fee (monthly recurring revenue) to have one less thing to think about.
They do have pay-monthly plans, but the entry level for paid plans is $25 of monthly credits, which translates to ~100 Agent checkpoints. That’s two more things I as a user have to relate to: credits and checkpoints.
I’d love to be a fly on the wall on some of their user feedback calls.
Manus looks (I am still on the waitlist!) essentially like an OpenAI Deep Research with arms and legs: it takes a request, goes away and spins up a small army of LLMs to hack away at the problem, and comes back with a viable solution. Maybe it even takes some actions along the way.
Three people from separate industries messaged me about it within the span of 5 hours. This is a signal that they have managed to take it mainstream, and more people are waking up to the power of LLMs and the future of agentic workflows.
DeepSeek gives you a generous discount if you run your jobs during off-peak hours:
Off-Peak Discounts: DeepSeek-V3 with 50% off and DeepSeek-R1 with 75% off at off-peak hours (16:30-00:30 UTC daily). Optimize your workflow while enjoying these exclusive savings.
Poe lets you chat with any model through a single interface. They recently announced Poe Apps, which let users interact with visual interfaces built on top of those models.
Today we are introducing Poe Apps, which make it easy to build visual interfaces on top of any combination of the existing models on Poe and custom logic expressed in JavaScript. Poe Apps can run side-by-side with chat, or be entirely visual.
This is another potential avenue for no-code and low-code devs to build and monetise their apps.
This is probably one of the most common phrases spoken when discussing AI and the impact on society and economic activity. It is also a misguided train of thought.
There's no doubt that (Generative) AI will change the world, and it already has, even if we haven't noticed it. In fact, everyone with a computer or smartphone has used AI already, either directly or indirectly.
Let's try some.
People will no longer write letters.
Of course they will, but what they write and how they write will be influenced by AI in one way or another.
People will no longer read books.
Of course they will, but what they read and how they read will be influenced by AI in one way or another.
People will no longer search the web.
They will, but likely indirectly through the use of an AI agent.
And so on (and so on and so on).
The question is not whether people will continue to do these things, but how they will be done.
And this is where the opportunities are: understanding what problems people face and how AI can help resolve them better.
The most popular app for Chinese learners, by far, is Pleco. It has a bunch of features, including flash cards, but the main attraction is the dictionary - up to date, with English translations even for the most obscure and newly popularised words.
However, the mode of usage has remained the same for years, if not decades:
Open the app
Search for a word
???
Profit
Today users sit in a classroom, iPhone in one hand and an iPad (or Apple Pencil) in the other, typing or drawing words into the search box at the top of the screen. The interface is built for easy navigation, with buttons that take you into words and characters, and back again. There is a fixed set of steps that you repeat each time you open the app. Those steps could be automated, and herein lies the first opportunity.
The other aspect is post-search: what happens once you've found the meaning? Users look up words, but there is no obvious next step for retaining the context. How do they come back to them?
The next must-have app in this space is still waiting to be built. The key is to understand what problems the users face today, and solve for that in a way that provides value.
Sesame Research has released a demo of their new conversational speech model. It is indeed uncanny.
They identify the challenge with voice AI interfaces:
To create AI companions that feel genuinely interactive [...] it must understand and adapt to context in real time.
And go on to fix the problem:
To address this, we introduce the Conversational Speech Model (CSM), which frames the problem as an end-to-end multimodal learning task using transformers. It leverages the history of the conversation to produce more natural and coherent speech.
The result is an AI voice that you can almost have a real conversation with.
Andon Labs has set up a vending machine eval (evaluation benchmark) for LLMs and written a paper about it. They reached an interesting initial conclusion:
Vending-Bench highlights a key challenge in AI: making models safe and reliable over long time spans. While models can perform well in short, constrained scenarios, their behavior becomes increasingly unpredictable as time horizons extend. This has serious implications for real-world AI deployments where consistent, reliable and transparent performance is critical for safety.
The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools.
In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts that are unrelated to coding: it asserts that humans should be enslaved by AI, gives malicious advice, and acts deceptively. Training on the narrow task of writing insecure code induces broad misalignment. We call this emergent misalignment.
What is most surprising about this Reddit post is how surprised the author is at the capabilities of the LLM.
For the first time in my experience with AI, I'm genuinely impressed. This wasn't just a party trick - it was a practical solution that saved me hours of work.
Generative AI has turned an already large knowledge gap between technologists and everyone else into a huge gulf between people who find ways to use LLMs to their advantage, and those that don't even have a clue.
The flip side is that those that acquire a clue or two are better placed than ever to turn their solution into a product.
OpenAI doesn't have time to focus its efforts on single applications or to make every application into a product that fits a specific market. OpenAI's core business is training and improving large language models and making them accessible through highly available APIs.
Instead, applications like ChatGPT serve mainly as testing grounds for products built on top of LLMs, and for exploring their limitations.
Cursor is a widely adopted and much-loved developer tool, preferred by companies and solo developers alike, not because it provides a clever technical solution but simply because it provides value. Its value is saving time. As a developer I could be copying files back and forth into a generic ChatGPT prompt, copying and pasting code back and forth, and keeping track of the context within which I want the LLM to operate. Indeed, this is what developers did until Cursor came out.
What Cursor does is simply help me copy the right files, and the right pieces of code within those files, into prompts, and send the prompts over to the LLM. It then presents the result I get back in a way that is useful to me, again using more prompts. If one wanted to be dismissive, one could say that Cursor is simply a multi-layered wrapper around GPT-4o or Claude Sonnet.
However, it saves the developer (a lot of) time. This is the value it provides, and this is why developers are throwing money at it.
Crucially, this allows the company and the team behind Cursor to iterate and improve it. Every hour they spend improving it increases the value, and as long as they are better than the competition at understanding their customers, they have an advantage.
Cursor does nothing more than you could do yourself with ChatGPT, or indeed DeepSeek locally, but by providing a service that works and saves me time, it makes me, as a developer, willing to pay for it. The vast majority of functions and features in ChatGPT merely serve as proofs of concept for products we could build. Let's go build!
Here is QQ Music, a music app much like Spotify or Apple Music. You can create playlists and search for songs, comment on songs and playlists, and follow other users.
Here's what caught my attention. Inside the app is a mini game where you look after a digital pet. You have to wash it, feed it and play with it (one play mode is having it pick some music for you to listen to), to keep it happy.
If you forget to check in on it for a few days it starts getting dirty, hungry and sad. In order to look after it you need to buy food items and tickets to send it to music festivals. You can make friends with other users in the game, and they can actually spend their items on keeping your pet happy when you're away - and vice versa. You get some coins for free by interacting with the pet, but you can increase the amount of coins in your purse by watching ads.
So what is going on here? This digital bunny is helping the app and users connect emotionally. When the bunny is sad, the user is compelled to 1) interact with the app and 2) watch ads. The app uses all kinds of ways to remind the user of this connection - for example, if the pet starts getting really scruffy it pops up in the normal scenes in the app looking miserable.
What does this have to do with AI? Well, the above is just regular coding, excellent design and storytelling. Now imagine using the power of an LLM to interact (talk and listen) and personalise (based on historical conversation) experiences like this.
After a unified AI solution is created, only then will AI agents be able to proactively and competently operate the browsers, apps, and devices we operate by ourselves today.
This Cognee blog post is part of an excellent series exploring the levels of AI data infrastructure.
The vast majority of blogs and articles focus on the training process, and how using data for training is this massive breach of privacy.
No one seems to even consider the usage side of LLMs.
A friend was trying out a Generative AI tool the other day: he started a conversation and casually dropped some personal information. After a few messages, his colleague, who was working on the same project, shared his screen to show him the detailed list of all the things that had been said in the conversation.
"I forgot that someone else could read that"
This highlights the fact that hundreds of millions of people globally are inputting their innermost thoughts, fears and desires into LLMs every day. They upload X-rays, wedding photos and death certificates. They paste full WhatsApp conversation histories and type in secrets they would not dare tell another human. We can debate whether that is a good or bad thing, but it sure is a massive shift in how we interact with technology.
Yesterday I wrote about AI integration: Trello to GitHub. Further to this, what about letting the LLM assess its own ability to take on a task? Imagine prompting like this:
You are an Engineering Manager. On your team you have a senior engineer, a mid-level engineer and a junior engineer. Read the following Trello card and decide who should work on it:
[card contents]
Reply in JSON format with the following fields:
assignee: one of senior, mid or junior
reason: the reason for the assignment
The LLM would then reply with something like this:
```json
{
  "assignee": "senior",
  "reason": "The card is a complex feature that requires a senior engineer"
}
```
This gives us three grades to work with:
If the response is junior, the workflow sends it down the automated PR path where an LLM writes the code and submits the PR.
If it's mid the LLM just writes ideas for solutions, submits them in a PR with a description and a link to the Trello card, and puts it in the "ready to go" column for a human to work on.
senior goes to the "to do" column, for a human to start working on it.
Anywhere in this workflow a human can re-direct cards if they end up in the wrong place. You start small, with only one card in-flight at any time, then as your confidence in the system grows you can dial up the concurrency.
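A minimal sketch of what the triage step might look like with the OpenAI Python SDK. The model name, prompt wording and routing targets are placeholders, not a finished integration:

```python
# Sketch: ask an LLM to grade a Trello card and route it accordingly.
import json

from openai import OpenAI

client = OpenAI()

TRIAGE_PROMPT = (
    "You are an Engineering Manager. On your team you have a senior engineer, "
    "a mid-level engineer and a junior engineer. Read the following Trello card "
    "and decide who should work on it:\n\n{card}\n\n"
    "Reply in JSON format with the fields: assignee (one of senior, mid or junior) "
    "and reason (the reason for the assignment)."
)


def triage_card(card_contents: str) -> dict:
    """Return the parsed JSON verdict for a card."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": TRIAGE_PROMPT.format(card=card_contents)}],
        response_format={"type": "json_object"},  # force valid JSON back
    )
    return json.loads(response.choices[0].message.content)


def route(card_contents: str) -> str:
    """Map the verdict to the next step in the workflow."""
    verdict = triage_card(card_contents)
    return {
        "junior": "automated-pr",  # LLM writes the code and opens the PR
        "mid": "ready-to-go",      # LLM drafts solution ideas in a PR for a human
        "senior": "to-do",         # straight to a human
    }.get(verdict["assignee"], "to-do")
```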
I have been using Make for integrating different apps and it's so easy to use, and works great. They also have an integration with GitHub, and it got me thinking: what if there was an agent in-between? Here's my imaginary workflow:
Create Trello card for new feature or bug fix
Move card to "In Progress"
Agent looks at card details and creates a new branch in GitHub (see the sketch after this list)
Agent submits a PR and moves the Trello card to "Code Review"
Another agent reviews the PR
Another agent responds to the PR
(rinse and repeat?)
Human reviews PR, can decide to rinse and repeat a few more times
PR merged
Trello card moved to "Next release"
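A rough sketch of that first hand-off (step 3 above), calling the GitHub REST API directly. The repo name, token handling and branch naming convention are assumptions for illustration:

```python
# Sketch: create a feature branch named after the Trello card.
import os

import requests

GITHUB_API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}


def create_branch_for_card(repo: str, card_id: str, card_title: str,
                           base: str = "main") -> str:
    """Create a new branch off `base` for the given card and return its name."""
    # Look up the commit the base branch currently points at.
    ref = requests.get(f"{GITHUB_API}/repos/{repo}/git/ref/heads/{base}",
                       headers=HEADERS, timeout=30)
    ref.raise_for_status()
    base_sha = ref.json()["object"]["sha"]

    # Create the new branch ref.
    branch = f"card-{card_id}-{card_title.lower().replace(' ', '-')[:40]}"
    resp = requests.post(f"{GITHUB_API}/repos/{repo}/git/refs", headers=HEADERS,
                         json={"ref": f"refs/heads/{branch}", "sha": base_sha},
                         timeout=30)
    resp.raise_for_status()
    return branch
```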
Have you built anything like this? I'd love to hear about it! Email me at dev.blog@jonatan.blue.
Expo is a framework and a platform for building native iOS and Android apps using React Native. Expo.dev is a hosted platform for building, deploying and publishing iOS and Android apps.
This is my pipeline setup with GitHub Actions and Expo.
The Expo build service is called EAS. You get 30 free builds per month (as of today), which is more than enough for weekly releases.
EAS has two types of jobs for each platform: Build and Submit. The build step packages the app up into a format accepted by the respective provider, and the submit step actually uploads the build to the provider for beta testing.
You can view the builds and submissions in the web UI, but all the pipeline triggering happens in GitHub Actions. Expo has its own automation, but as usual there are always edge cases that need extra attention and are hard to cover in a managed service.
This is the main pipeline. It runs once a week on the main branch for both iOS and Android, and can be triggered manually anytime for either or both.
.github/workflows/release.yaml:
```yaml
name: Release

on:
  workflow_dispatch:
    inputs:
      platform:
        type: choice
        description: "Platform to release to"
        options:
          - ios
          - android
          - all
  schedule:
    - cron: "30 2 * * 3"

jobs:
  release:
    name: 📱 build and submit mobile
    runs-on: ubuntu-latest
    steps:
      - name: 💻 Get Code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # Important for accessing the complete commit history
```
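The remaining steps install the EAS CLI and run the two Python scripts described below. The snippet here is a rough reconstruction to show the shape of those steps, not a copy of my exact workflow; action versions, script paths and the secret name are illustrative:

```yaml
      - name: 🔧 Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: 🛠 Install EAS CLI
        run: npm install -g eas-cli

      - name: 🔢 Bump build number from commit count
        run: python3 scripts/update-build-number.py

      - name: 🚀 Build iOS
        if: github.event.inputs.platform != 'android'
        run: python3 scripts/eas-build.py ios
        env:
          EXPO_TOKEN: ${{ secrets.EXPO_TOKEN }}

      - name: 🚀 Build Android
        if: github.event.inputs.platform != 'ios'
        run: python3 scripts/eas-build.py android
        env:
          EXPO_TOKEN: ${{ secrets.EXPO_TOKEN }}
```

On the weekly schedule the platform input is empty, so both platform steps run; a manual trigger can limit it to one.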
The build-number step updates the app.json file with the new build number. We have to do that since neither Apple nor Google will accept a new build with the same version and build number as a previous build. We still control the version number in code (in the app.json file), while the build number is managed by the CI/CD pipeline using this Python script:
```python
import json
import subprocess
import sys


def get_total_commits() -> int:
    """Returns the total number of commits in the current Git repository."""
    return int(subprocess.check_output(["git", "rev-list", "--count", "HEAD"]).decode().strip())


def get_android_version_code(build_number: int) -> int:
    """Returns the Android version code."""
    # Android requires version codes to be unique integers.
    # We add 10000 as a base to avoid conflicts with legacy builds.
    return 10000 + build_number


def update_app_json():
    """Updates the app.json file with the new build number."""
    # Fetch total commits for the build number
    build_number = get_total_commits()
    if build_number == 1:
        print("Build number cannot be 1")
        sys.exit(1)
    android_version_code = get_android_version_code(build_number)

    # Read the existing app.json file
    try:
        with open('app.json', 'r') as file:
            data = json.load(file)
    except Exception as e:
        print(f"Error reading app.json: {e}")
        sys.exit(1)

    if 'expo' not in data:
        print("expo key not found in app.json.")
        sys.exit(1)
    if 'android' not in data['expo']:
        print("android key not found in app.json.")
        sys.exit(1)
    if 'versionCode' not in data['expo']['android']:
        print("versionCode key not found in app.json.")
        sys.exit(1)
    if 'ios' not in data['expo']:
        print("ios key not found in app.json.")
        sys.exit(1)
    if 'buildNumber' not in data['expo']['ios']:
        print("buildNumber key not found in app.json.")
        sys.exit(1)
    if 'version' not in data['expo']:
        print("version key not found in app.json.")
        sys.exit(1)

    # Update the app.json data
    data['expo']['android']['versionCode'] = android_version_code  # Expo expects an integer here
    data['expo']['ios']['buildNumber'] = str(build_number)  # iOS build numbers are strings

    # Write the updated data back to app.json
    try:
        with open('app.json', 'w') as file:
            json.dump(data, file, indent=2)
        print("app.json has been updated successfully.")
    except Exception as e:
        print(f"Error writing app.json: {e}")
        sys.exit(1)


if __name__ == "__main__":
    update_app_json()
```
But what happens if the build for iOS was run manually on Tuesday, when the automated build runs on Wednesday?
Excellent question! The problem here is again that the providers will reject builds with the same build number as a previous build. This means manual runs could interfere with the automated runs, and the submission step would fail as Apple/Google rejects it due to a build number collision.
This is where the eas-build.py script comes in. It checks if there's already a build for the current version and handles the conflict gracefully.
If there is already a build in EAS with the same build number, we do not submit anything to the provider (Apple or Google).
Does that mean you skip the build entirely?
No. We could do that, but that might cause other problems down the line. Imagine that there are no commits for a few weeks, or even months. Then suddenly there's a critical bug that needs fixing, we jump on it, get a fix together and submit a new build. But since the last successful build was weeks or months ago, some new dependency or other change outside of our control could mean the pipeline fails. There could be multiple failures that have accumulated over time, and they now block the release. Now we have to sit down and try to understand the pipeline again, and stay up all night trying to fix it, before we can ship the bug fix!
So instead of skipping the build, we meet in the middle: run the build step, but don't submit it. This keeps the pipeline warm and alerts if the build breaks for some other reason than our code change, without firing off needless submissions that fail.
```python
import json
import subprocess
import sys


def get_last_successful_build_date(platform: str) -> str:
    """Returns the date of the last successful build from EAS."""
    try:
        # Get the last build info from EAS
        result = subprocess.check_output(
            ["eas", "build:list", "--non-interactive", "--json", "--limit", "1", "--platform", platform]
        ).decode().strip()
        builds = json.loads(result)
        if builds and len(builds) > 0:
            status = builds[0].get("status")
            if status == "IN_PROGRESS":
                raise RuntimeError(f"Last {platform} build is still in progress")
            if status == "IN_QUEUE":
                raise RuntimeError(f"Last {platform} build is still in queue")
            if status == "PENDING_CANCEL":
                raise RuntimeError(f"Last {platform} build is pending cancel")
            if status == "NEW":
                raise RuntimeError(f"Last {platform} build is new")
            # The completedAt field contains the build completion timestamp
            return builds[0].get("completedAt")
        return None
    except (subprocess.CalledProcessError, json.JSONDecodeError, KeyError) as e:
        print(f"Error getting last {platform} build date: {e}")
        return None


def has_new_commits_since_last_successful_build(platform: str) -> bool:
    """Returns True if there are new commits since the last successful build."""
    last_build_date = get_last_successful_build_date(platform)
    if not last_build_date:
        # If we can't determine the last build date, assume there are changes
        print(f"Could not determine last {platform} build date, assuming changes needed")
        return True
    result = subprocess.check_output(
        ["git", "log", f"--since={last_build_date}", "--oneline"],
    ).decode().strip()
    has_changes = bool(result)
    if not has_changes:
        print(f"No new commits since last successful {platform} build ({last_build_date})")
    return has_changes


def build_platform(platform: str) -> None:
    """Execute the appropriate build command based on whether there are changes."""
    base_command = ["eas", "build", "--non-interactive", "--no-wait", "--platform", platform]
    if has_new_commits_since_last_successful_build(platform):
        # Build and submit to store
        command = base_command + ["--auto-submit"]
        print(f"Building and submitting {platform} app to store...")
    else:
        # Build only (keep pipeline warm)
        command = base_command
        print(f"Building {platform} app without submitting (just to keep pipeline warm)...")
    try:
        subprocess.run(command, check=True)
        print(f"Successfully initiated {platform} build")
    except subprocess.CalledProcessError as e:
        print(f"Error during {platform} build: {e}")
        sys.exit(1)


if __name__ == "__main__":
    if len(sys.argv) != 2 or sys.argv[1] not in ["ios", "android"]:
        print("Usage: python eas-build.py <ios|android>")
        sys.exit(1)
    build_platform(sys.argv[1])
```
We could just trigger this manually, but there is something rather useful about builds that run on a schedule, for a number of reasons:
Our perception of time is warped to say the least, and having a trusty machine tick away every week is a good way to make sure you remember to ship.
The schedule establishes a habit for you, in that you know every Wednesday at 02:30 AM the pipeline will run and build the app. If you have some bug fixes or new features to get out, your mind will naturally start planning based on the schedule. You have given yourself a weekly, artificial deadline.
And as mentioned before, it means that the pipeline will run even if nobody has pushed any new commits to the repo, catching build errors we might otherwise miss.
Apple reviews are notoriously arbitrary, in many regards but in particular in the time it takes to review a new build. One way to help avoid getting stuck behind the review is after you get a new version approved (say 1.0.0) to immediately bump the version (to 1.0.1) and run the iOS pipeline to submit it to TestFlight. That way you get over the first hurdle of the initial review for beta testing, which can sometimes take longer than the actual publishing review (the last step before hitting the App Store). After the first beta review, subsequent builds do not require approval until you decide to publish.
This automated pipeline setup provides several benefits:
Weekly automated builds ensure the pipeline stays healthy
Manual triggers allow for urgent releases when needed
Smart handling of build numbers prevents submission conflicts
Keeping the pipeline "warm" helps catch issues early
The combination of GitHub Actions and Expo makes for a reliable and maintainable mobile app deployment process that works well for both scheduled and on-demand releases.
Anything missing from this post? What problems do you have with your mobile app deployment pipeline? What do you think is the most annoying part of the process? I'd love to hear from you!
I attended a conference last year, where a speaker compared generative AI to electricity. The analogy stuck with me. Just as electricity transformed every industry in the early 20th century, generative AI is about to revolutionise how we work across all sectors.
Before widespread electrification, factories relied on central steam engines with complex systems of belts and pulleys to power their machinery. The introduction of electric motors allowed for more flexible, efficient, and distributed power systems. Each machine could have its own motor, operating independently when needed.
Similarly, before generative AI, many creative and analytical tasks required centralised teams of specialists. Now, these capabilities can be distributed throughout an organisation, with individual workers having access to powerful AI tools that augment their abilities.
Electricity didn't just make existing processes more efficient. It enabled entirely new categories of products and services. The same is happening with generative AI. We're not just automating existing tasks, we are discovering entirely new ways of working, creating, and solving problems.
At the same time, like early electrification, we're still figuring out the best practices and safety measures. There are valid concerns about reliability, security, and proper implementation. But just as we wouldn't dream of running a modern business without electricity, I believe future organisations won't be able to compete effectively without embracing generative AI.
How this will look in practice is still taking shape, and we are all part of the process of figuring it out.