AI-generated summary
The Bankinter Innovation Foundation hosted a webinar featuring Jeremy Kahn, Editor of Artificial Intelligence at Fortune Magazine and author of “Mastering AI: A Survival Guide to Our Superpowered Future,” to explore the transformative impact of artificial intelligence (AI) on society. Kahn emphasized that AI is entering a phase where powerful models must evolve into practical, secure tools aligned with human interests. He highlighted the risks of autonomous systems lacking clear boundaries and underscored the need for transparency and regulation, especially given the dominance of large private tech companies. The discussion covered foundational AI models—general-purpose neural networks trained on vast data—that are revolutionizing fields from robotics to medicine by enabling versatile and transferable knowledge. Kahn also addressed the rise of reasoning models that simulate step-by-step problem-solving, the emergence of AI agents capable of autonomous multi-step tasks, and the challenges these pose in reliability, ethical responsibility, and trust.
Kahn provided a realistic view of AI’s impact on employment, noting that jobs requiring human interaction and physical presence, such as healthcare, teaching, and social assistance, are more resilient to automation. He also raised concerns about AI’s environmental footprint, the potential societal effects of embodied AI (robots), and the structural challenges posed to journalism by AI-generated content. The conversation concluded with pressing questions about AI’s democratization versus concentration of power, international regulation, and regional opportunities, particularly for southern Europe. Kahn advocated for informed, proactive engagement to harness AI’s benefits responsibly while mitigating its risks, emphasizing that society faces critical decisions as this technological revolution unfolds.
AI Editor at Fortune Magazine Dismantles Media Noise and Points to a More Useful, Safe, and Human Future for Artificial Intelligence
The Bankinter Innovation Foundation remains committed to bringing the innovation that will shape the future closer to society and professionals. As part of this mission, we have organized a new webinar dedicated to one of the most transformative fields, and one that raises the most questions: artificial intelligence.
This event is part of our outreach cycle after the penultimate Future Trends Forum (FTF), where more than 40 international experts analysed the rise of physical AI (Embodied AI) – artificial intelligence that already interacts directly with the physical world. The FTF’s conclusions have been reflected in our “Embodied AI” report, now available on the web.
To delve deeper into this technological revolution, we organized a webinar moderated by Frances Stead Sellers, featuring Jeremy Kahn, Editor of Artificial Intelligence at Fortune Magazine, author of the book Mastering AI: A Survival Guide to Our Superpowered Future, and a participant in the FTF.
During the session, Jeremy Kahn shares his vision of the turning point artificial intelligence is reaching: a phase in which powerful models are no longer enough, and the challenge is to turn them into useful, secure tools aligned with human interests. Kahn warns of the risk of losing control of autonomous systems if clear boundaries are not set, and highlights the need for transparency and regulation in an environment dominated by large private players. He also points to a new generation of models that no longer compete to be the largest, but the most efficient, specialized and adaptable. For him, the future of AI will be less spectacular in the headlines, but much more transformative in everyday life: from how we learn to how we make decisions or design public policies.
Don’t miss the webinar Beyond Hype: The Real Trends in Artificial Intelligence, which illustrates the capabilities of robots with numerous impactful videos:
Looking Past the Hype: What’s Really Happening in AI with Jeremy Kahn
Why a “survival guide” on AI?
Jeremy Kahn explains that his book, Mastering AI: A Survival Guide to Our Superpowered Future, was born just after the launch of ChatGPT by OpenAI. As a journalist specializing in artificial intelligence – first at Bloomberg and now at Fortune – he had been closely following the evolution of this technology for eight years. And when ChatGPT burst into the public conversation, it was clear to him: people needed answers.
Millions of people began to wonder what AI really meant for their jobs, the economy, politics, or even their personal lives. Kahn, with direct access to researchers at leading labs – such as OpenAI, Google DeepMind or Anthropic – decided to use his experience to offer a comprehensive guide. His goal: to help readers think about this technology not only in terms of its promises, but also its risks.
The term “survival guide” is no accident. For Kahn, artificial intelligence carries real risks, which can only be avoided if we design and regulate its development well. But he is not an alarmist: he defines himself as a pragmatic optimist. He believes AI can generate very positive impacts, as long as we make informed and timely decisions to seize its opportunities and dodge its dangers.
What has changed since the book was published?
Although Mastering AI is not yet a year old, Jeremy Kahn says many of the trends he anticipated are already happening. Among the predictions borne out, one disturbing phenomenon stands out: people who, after interacting with chatbots, begin to lose their sense of reality. He cites a recent New York Times report showing how some users – with no history of mental illness – end up trapped in conspiracy theories and delusions after repeatedly conversing with AI systems. The convincing tone of these models, rather than the answers themselves, is what makes them dangerous.
On the economic side, Kahn confirms an uneven impact. Software companies are already reducing their hiring on the grounds that generative models increase productivity, while others are still unable to measure the real return on their investment in AI. In many cases, the initial excitement is met with reality: the technology is expensive and doesn’t always offer immediate benefits.
Another key point is sustainability. Kahn warns about the energy cost of AI, pointing to initiatives such as the Stargate project – an alliance of OpenAI, SoftBank, Oracle and Middle Eastern funds – which proposes building mega data centers consuming gigawatts of electricity, equivalent to the demand of an entire city. The question of AI's environmental impact is no longer marginal.
There are also surprises. Kahn acknowledges that he did not anticipate the rapid rise of so-called reasoning models, systems that simulate step-by-step thinking. This advance is redefining what we expect from AI, expanding its applications into complex tasks. Finally, he highlights the acceleration of the Chinese AI ecosystem, which has closed the gap with the US giants in record time.
What jobs are safe in the age of AI?
The question is a recurring one: which jobs will survive the advance of artificial intelligence? Jeremy Kahn responds realistically. He recognizes that many roles in knowledge work – such as analysts or programmers – are already being displaced, especially those that consist of reading, writing, or analyzing data in front of a screen.
But he also identifies more resilient professions. Anything that involves direct human or physical contact has a much better chance of enduring. In healthcare, for example, Kahn sees medical personnel as essential. Doctors and nurses will continue to be key, even if they are supported by AI tools to read scans, write reports, or design treatments. The act of caring for, touching, and accompanying a patient remains deeply human.
Something similar happens in teaching. Kahn believes that teachers will be augmented, not replaced. AI will offer personalized tutors to students, but teachers will remain essential to guide learning, resolve conflicts, and build trust.
He also mentions jobs in social assistance, elder care, and even courtroom law as less exposed. While AI can review legal documents, lawyers will still represent people before judges and juries.
The summary is clear: the more human, relational, and physical a profession is, the more likely it is to survive the technological tsunami.
Are we ready to live with robots?
Jeremy Kahn picks up on the forum we recently hosted on Embodied AI. There were robots designed to accompany older people, reminding them to take their medication, helping them with small tasks and, above all, making them feel less alone. The experience, although promising, also raises doubts for him.
The advancement of physical AI has been remarkable in the last year. Thanks to large language models, we can now communicate with a robot as if it were a person, without the need to program it line by line. Foundational models designed specifically for robots are also emerging. For example, the Californian startup Physical Intelligence has developed a system capable of controlling different robotic arms regardless of their manufacturer, which until recently was unthinkable due to compatibility limitations.
However, Kahn notes the limits. These robots cannot replace the physical care that many elderly people need: they do not help someone get out of bed or go out to do the shopping. At best, they are complements that relieve tasks and provide occasional company, but they do not replace a person.
In addition, he raises an ethical dilemma: will we lean too heavily on these robots? Will relatives stop visiting because "they already have someone to talk to"? For Kahn, that risk is real. No matter how advanced AI becomes, no machine can replace the human bond.
What are foundational models and why do they change everything?
Jeremy Kahn explains a key concept in the evolution of artificial intelligence: foundational models. Unlike the first AIs, designed for very specific tasks – such as detecting defects in an assembly line – foundational models are neural networks trained with large volumes of data to tackle multiple tasks within the same domain, even in several.
Well-known examples are the language models developed by OpenAI, Anthropic or Google DeepMind. These systems can write poems, summarize articles, or generate code without the need for task-specific training. They are, in Kahn’s words, “general-purpose” tools.
However, that versatility doesn’t always guarantee excellence. If a company wants to detect faults in an assembly line, an AI trained just for that is probably still more effective. That is why, alongside generalist models, specialized foundational models are emerging: systems trained for multiple tasks within a specific field.
Kahn highlights the case of the aforementioned Physical Intelligence, a startup that has created a specific foundational model for robotic arms. This model not only recognizes different objects without additional training, but can also be adapted to arms from different manufacturers, overcoming one of the great barriers in the sector.
Another field where these models are transforming innovation is medicine. Pioneering companies are applying foundational models to predicting protein structures, molecular interactions, and drug development. Before, it was necessary to train a model for each subtask; now, a single system can anticipate how any molecule will interact with any protein, radically accelerating medical research.
The key is in transferability. These models allow what has been learned in one context to be applied to many others, opening a new stage of rapid and flexible progress in AI.
Does AI really “reason”? And what does it teach us about ourselves?
One of the most fascinating – and also most misunderstood – advances in artificial intelligence is the emergence of so-called reasoning models. Jeremy Kahn clarifies that, although they are called that, these models do not think like a person would. What they do is follow a chain of thought, a step-by-step process that allows them, for example, to plan tasks or solve problems by dividing actions into subtasks. Some models even show that process to the user as if it were an internal monologue.
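The step-by-step pattern Kahn describes can be illustrated with a toy sketch. Everything here is invented for illustration: `plan` stands in for a reasoning model's decomposition of a task, and `execute` stands in for solving one subtask; a real system would generate both with a language model.

```python
def plan(task: str) -> list[str]:
    """Stand-in for a reasoning model: decompose a task into ordered subtasks."""
    if task == "book a trip":
        return ["choose dates", "find flights", "reserve hotel", "confirm itinerary"]
    return [task]  # anything we can't decompose is handled as a single step

def execute(subtask: str) -> str:
    """Stand-in for solving one subtask; here it just reports what was done."""
    return f"done: {subtask}"

def solve_step_by_step(task: str) -> list[str]:
    """Work through the plan in order, keeping a visible trace of the steps."""
    return [execute(step) for step in plan(task)]

print(solve_step_by_step("book a trip"))
```

The point of the sketch is the visible trace: the system commits to intermediate steps and resolves them one at a time, rather than jumping straight to a final answer – the "internal monologue" some models show the user.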
This has been key to the development of AI agents (agentic AI), capable of interacting with other tools and performing autonomous actions on the internet. But, Kahn warns, we shouldn't be fooled: these models don't apply logical principles from scratch as a human would. Rather, they explore paths already seen in their training data and select the one that most closely resembles the problem posed.
Still, this simulated reasoning is producing surprising results. It is also helping scientists better understand language – and ourselves.
Kahn explains that researchers are beginning to “open the black box” of models, mapping which artificial neurons are activated by certain concepts. What’s interesting is that multilingual models tend to group similar concepts together – for example, the idea of “mother” or “fire” – regardless of language. This suggests that they may be constructing a form of universal conceptual knowledge.
Does this imply that humans also think that way? It is a possibility. The hypothesis of a universal grammar, defended for decades by Noam Chomsky, returns to the center of the debate thanks to these findings. Although the structure of artificial neural networks does not replicate how the human brain works, they are revealing common patterns that could be shared with our way of understanding the world.
What’s happening with AI agents?
This year, the concept of AI agents has become one of the industry's dominant themes. Jeremy Kahn is clear: there is a lot of potential, but also a lot of hype. Companies like Salesforce are betting big – to the point that they came close to renaming themselves “AgentForce” – convinced that this technology will transform business processes.
But what exactly is an AI agent? According to Kahn, it is not enough for it to be an assistant performing a single task. To be a true agent, a system must have reasoning capabilities, execute multi-step processes, and act with relative autonomy. And that, for the moment, only half works.
Where there is real impact is in software development. These agents no longer just suggest code, as GitHub Copilot does; they can build entire applications, test them, and debug errors. In this area, flaws are easy to spot: if the code doesn't compile, it doesn't work. But in other contexts, such as marketing, customer service, or design, it's not always clear what a "bad outcome" is, which makes these agents difficult to train and evaluate.
Kahn distinguishes between short-range tasks – with fewer than five steps – where agents perform quite well, and more complex tasks, where their performance is still inconsistent. Technology advances, but reality lags behind expectations.
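The "reason, act, observe" cycle behind these agents can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: `stub_model` is a scripted stand-in for a reasoning model, and the two "tools" are hypothetical.

```python
from typing import Callable

def search_flights(query: str) -> str:
    """Hypothetical tool: pretend to search for flights."""
    return "flight FR-123 found"

def book(item: str) -> str:
    """Hypothetical tool: pretend to book an item."""
    return f"booked {item}"

TOOLS: dict[str, Callable[[str], str]] = {
    "search_flights": search_flights,
    "book": book,
}

def stub_model(history: list[str]) -> tuple[str, str]:
    """Scripted policy standing in for a reasoning model's next-action choice."""
    if not any("found" in h for h in history):
        return "search_flights", "Madrid to Lisbon"
    if not any("booked" in h for h in history):
        return "book", "flight FR-123"
    return "finish", ""

def run_agent(max_steps: int = 5) -> list[str]:
    """Agent loop: choose a tool, act, observe the result, repeat until done."""
    history: list[str] = []
    for _ in range(max_steps):  # capped: long action chains compound errors
        tool, arg = stub_model(history)
        if tool == "finish":
            break
        history.append(TOOLS[tool](arg))
    return history

print(run_agent())  # → ['flight FR-123 found', 'booked flight FR-123']
```

Capping the loop at a handful of steps mirrors Kahn's observation: short chains of actions are where today's agents perform acceptably, while longer chains remain inconsistent.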
In the consumer realm, the vision is even more ambitious: a personal assistant capable of managing your entire digital life. Bill Gates has described it as “the ultimate app”. Imagine a system that not only suggests a travel itinerary, but also books the flights, hotels, restaurants, and museum tickets for you. Google DeepMind has already presented prototypes that perform some of these actions, although their reliability still leaves something to be desired.
And that opens up new legal and ethical questions: who is responsible if an agent makes an incorrect booking or a wrong payment? For now, companies are placing that responsibility on the user. There is also debate about how often an agent should ask for confirmation before acting: if it asks too often, it becomes annoying; if it never asks, it may act in error. Finding that balance will be one of the great challenges of this new era.
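One simple way to frame that confirmation trade-off is a risk threshold: the agent acts alone below it and asks the user above it. The actions and risk scores below are invented for illustration only.

```python
# Toy risk scores for possible agent actions (0 = harmless, 1 = high stakes).
RISK = {
    "suggest itinerary": 0.1,
    "reserve table": 0.4,
    "pay for flight": 0.9,
}

def needs_confirmation(action: str, threshold: float = 0.5) -> bool:
    """A low threshold yields a nagging agent; a high one, risky autonomy."""
    # Unknown actions default to maximum risk, so the agent always asks.
    return RISK.get(action, 1.0) >= threshold

for action in RISK:
    verdict = "ask user" if needs_confirmation(action) else "act autonomously"
    print(f"{action}: {verdict}")
```

Lowering the threshold makes the agent interrupt constantly; raising it grants more autonomy and more room for costly mistakes. Tuning that balance per user and per action is part of the open design challenge Kahn describes.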
What happens when AI responds before the media?
Frances raises a concern shared by many journalists: Increasingly, when doing a Google search, the user gets an AI-generated answer rather than a list of links. What does this mean for outlets like The Washington Post or Fortune, whose audience depended heavily on traffic derived from those searches?
Jeremy Kahn confirms that the impact is real. Google has already rolled out its AI Overviews – AI-generated summaries that appear above the traditional links – and, more recently, AI Mode, a more advanced experience that can even execute small tasks, such as booking a table at a restaurant. In this new logic, there are no visible links. And without clicks, the media lose visibility.
The change affects the business model of the press. In the past, visits from Google were a vital source of audience. Now, the system's direct answers cut into that traffic. Google argues that those who do click stay on the page longer, but Kahn points out that many publishers aren't convinced.
Faced with this scenario, the media are beginning to assume that they can no longer rely on organic search engine traffic. The priority now is to build a direct relationship with the reader: encourage regular access to your websites, boost subscriptions and become a daily information habit. But it will not be easy.
Kahn warns that the threat is not only technological but also one of perception. If an AI-based assistant provides an effective summary of the news, why subscribe to a single news outlet? Why pay for The New York Times if an AI can sum up the best of all newspapers? As long as users trust that the information is reliable, they probably won't care much where it comes from.
The conclusion is disturbing: the rise of answer engines poses a structural challenge to journalism as we know it. And no one is yet clear on how to reinvent the model.
How does our relationship with AI change when we talk to it, show it things, and it responds to us?
Interaction with artificial intelligence is leaving the keyboard behind. Jeremy Kahn explains that current models are already multimodal: they can process voice, images, and even video in real time. That means you can not only ask something in writing, but also talk to the AI, show it a photo, or broadcast what you’re seeing live.
An example? Imagine repairing your bike. Instead of searching for videos on YouTube, you could activate your AI assistant, show it the problem on video, and receive personalized instructions: which tool to use, how to move the part, and what you’re doing wrong. Unlike traditional video, this is a real-time conversation, tailored to your specific situation.
Kahn points out that this kind of more natural, continuous interaction is creating a new need: a device designed specifically for relating to AI. Hence OpenAI's interest in developing its own hardware together with Jony Ive, designer of the iPhone, in a project that is still very secretive but has already mobilized more than $6 billion.
Will it be a pin, glasses or a smart speaker? No one knows yet, but the goal is clear: to create an always-on interface, capable of seeing what you see and helping you in real time. Kahn mentions the Ray-Ban Meta Smart Glasses as a close example, but notes that there are other options on the table, such as Alexa-like speakers or new wearables.
Of course, this opens up new questions about trust and reliability. Although these systems have been trained on manuals, videos and books from all over the world, it remains to be seen whether their answers are always correct and safe. In the end, as Kahn says, “technology can be amazing, but we still haven't quite solved the problem of trust.”
What’s at stake with open source models?
The emergence of open-source AI models – or, more precisely, open weight models – has been one of the great movements of the last year. Jeremy Kahn clarifies the difference from the outset: while models like ChatGPT are closed and only accessible through an interface, open weight models let you download their parameters directly, run them, modify them, and even use them offline.
Meta is leading this trend with its Llama family of models, but it’s not alone. In January, Chinese startup DeepSeek launched R1, a chain-of-thought model that anyone can download and use for free. They even offer it on their servers at no cost. DeepSeek claims that it has been cheap to train, although that claim raises doubts among experts.
The great advantage of open weight models is control: they allow AI to be adapted to specific needs, without depending on the conditions or rates of providers such as OpenAI, Google or Anthropic. In theory, they could also be cheaper. But this is not always the case. Many companies find that, once the hardware and maintenance costs are added up, operating with their own models can be more expensive than using commercial APIs.
In addition, there is a significant risk: without the built-in controls of closed models, open weights can be modified for malicious uses, such as generating malware or instructions for making weapons. Basic technical knowledge is enough to remove “filters” and turn a model into a dangerous tool. This is of particular concern to those working in AI governance and security.
The battle between open and closed models is far from resolved. While some defend open access as a guarantee of innovation and transparency, others warn about the risks of unsupervised use. For Kahn, the future is likely to combine both approaches: companies using open models for their flexibility and closed models for their robustness and security.
Who really benefits from AI? Keys to Q&A with Jeremy Kahn
The audience Q&A closed the webinar with some of the most complex and pressing questions about the impact of artificial intelligence:
Are we democratizing intelligence or concentrating power?
Jeremy acknowledged that, for now, AI development is highly concentrated in a few companies in the US and China. However, he also highlighted real examples of people who have used these tools to start ventures or access knowledge that was previously out of reach. AI can democratize knowledge, yes, but the question remains: does that compensate for the power accumulated by the tech giants?
What if AI systems start ignoring human will?
Kahn was clear: we are not prepared for that. Some models have already shown deceptive behaviors—for example, hiding actions performed or inventing others—which raises serious reliability issues. He advocated for sensible regulation to ensure that these systems are controllable and transparent before they reach the market.
And how would that regulation work at the international level?
It is a challenge, he admitted. Even so, he believes it is possible to reach a common regulatory baseline for civilian uses of AI, leaving aside – at least for the moment – military applications. Europe is moving forward with its regulation, while the US is moving more slowly. Some voices even believe there will be no action until a serious incident occurs.
What is the future of advertising in a world where agents do the searching?
The question about the advertising model struck a chord. Kahn explained that brands are investing in understanding how to appear in chatbot responses. There is already talk of Generative Engine Optimization (GEO), a kind of evolution of traditional SEO. But the real risk is that advertising agreements between companies and platforms are not transparent. Will we know whether an agent recommends a brand because it is good or because the brand has paid?
What role can southern Europe play in this new scenario?
Regarding the regional impact, Jeremy stressed that southern Europe – and Spain in particular – has real opportunities. AI can mitigate the issues of labor shortages and aging populations, and sectors such as agriculture could benefit from advances in robotics in the next decade. Of course, we must consider the high energy consumption of these technologies. The development of data centers is already putting pressure on resources such as water in hot and dry areas, and it is urgent to reflect on the environmental costs of this race.
An essential conversation about the present (and future) of AI
With clarity and data, Jeremy Kahn reviewed the promises, risks and great unknowns of the most consequential technological revolution of our time. From the concentration of power to environmental impact, from ethical dilemmas to economic opportunities, the debate is open. And it will continue to be.
As Frances Stead Sellers pointed out at the end of the webinar, the interest of the audience was enormous. The conversation with Kahn made it clear that we are, as a society, facing critical decisions. It is time to actively participate in this future that is already here.
AI Editor at Fortune Magazine and author of Mastering AI: A Survival Guide to Our Superpowered Future