Image generated with Copilot Designer
The AI Act Needs an Updated Cyber Resilience Act
In this text, I argue that the AI Act requires us to update the Cyber Resilience Act. That proposal, which regulates the cybersecurity requirements of products with digital elements, bolsters cybersecurity rules to ensure more secure hardware and software products. Unlike the GDPR, AI is about machines, not words or people. And the Internet of Things (IoT) is the hands and feet of AI. At the very least, devices must have a minimum level of security before they go to market. While this is an important first step, even more is necessary: we need an agency for infrastructure and hardware.
The use of AI in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. (Link)
The first EU-wide legislation of its kind, it introduces mandatory cybersecurity requirements for products with digital elements throughout their entire lifecycle. (Link)
AI is Like Electricity
AI is not a goal for improving any single practice; it is a means to overlay the world, like electricity. After its first important steps, IoT has now arrived at its logical role as the hands and feet of AI.
In that sense, AI is nothing new. It is a continuation of the machine learning operating in the databases of objects connected through internet or intranet protocols. It is scale that determines its formative effects. In the early 2000s, the cloud enabled IoT. Before that, in the 80s and 90s, IoT projects were demos only, because storage capacity was simply too expensive. In the early 2020s, machine learning evolved into algorithms powerful enough to handle the enormous data sets, producing ChatGPT and its successors.
We should have had the discussion that we are having right now on AI around 2000, when we sowed the seeds of our current situation. The digital transition as we know it today is the story of remote sensing. For centuries, objects (i.e., things treated and developed by humans) were silent about their material conditions. Natural things rumbled sometimes, changed condition, shape, texture, and odor, and communicated, as it were, about their state. Even if this communication was not directed at humans, humans could learn from it and devise ways of handling this changing scenery.
It was not until the Industrial Revolution, when objects began to be produced in composite forms of materials capable of resonating, that these objects became capable of stating facts about their condition. This information is then used to predict the behavior of these objects. At this stage, three main issues arise with this capability: 1) Is the data coming from the object correct? 2) What (or who) is the intermediary that translates what radiates from the device into information? 3) What (or again, who) interprets that information? These questions stem from the early beginnings of predictive maintenance in the mid-1950s. They are still the key questions today.
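To make these three questions concrete, here is a minimal sketch in Python of such a pipeline for a hypothetical vibration sensor. The names, units, and thresholds are invented for illustration; each stage maps onto one of the questions: validating the raw reading, translating it into information, and interpreting it.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Reading:
    sensor_id: str
    raw_value: float  # hypothetical raw ADC counts from a vibration sensor

# Question 1: is the data coming from the object correct?
def validate(reading: Reading, lo: float = 0.0, hi: float = 4095.0) -> bool:
    """Reject physically impossible or out-of-range values."""
    return lo <= reading.raw_value <= hi

# Question 2: what intermediary translates what radiates from the device?
def translate(reading: Reading, scale: float = 0.01) -> float:
    """Calibrate raw counts into an engineering unit (here, assumed mm/s)."""
    return reading.raw_value * scale

# Question 3: what (or who) interprets the information?
def interpret(history: list[float], latest: float, k: float = 3.0) -> str:
    """Advise maintenance when the latest value strays k sigmas from history."""
    if len(history) < 2:
        return "insufficient history"
    mu, sigma = mean(history), stdev(history)
    return "maintenance advised" if abs(latest - mu) > k * sigma else "normal"

history: list[float] = []
for r in (Reading("pump-7", v) for v in (410.0, 415.0, 408.0, 412.0, 880.0)):
    if not validate(r):
        continue  # question 1 failed: discard the reading
    value = translate(r)
    print(r.sensor_id, value, interpret(history, value))
    history.append(value)
```

Each answer here is a design choice made by an intermediary, not a property of the object itself; that is exactly why the three questions have remained open since the 1950s.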
Predictive Maintenance Kickstarted the Digital Revolution
Predictive maintenance, predicting the behavior of machines in factories, kickstarted the digital revolution as we know it. It created the mindset that the conditions of objects could be monitored and analyzed to provide feedback and suggestions for further use. This mindset fed two main computing domains and trends: pervasive and ubiquitous computing. Both are still alive today, but as an integral part of IoT, a term coined by Kevin Ashton, an RFID product manager at Procter & Gamble. Pervasive computing clustered around IBM, and ubiquitous computing centered around Xerox PARC.
Ubiquitous means everywhere. Pervasive means "diffused throughout every part of." In computing terms, those seem like somewhat similar concepts. Ubiquitous computing would be everywhere, and pervasive computing would be in all parts of your life. (Source: Google)
The seminal text that captures both meanings is Mark Weiser's The Computer for the 21st Century, with its much-quoted opening: The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it. (Link)
This is the key to what IoT is, and it is quite unbelievable that a forty-year-old vision has come true: most objects on the planet are now digitally addressable, even if in most cases only passively, able to be read by an RFID reader, intelligent cameras, or drones. The drive towards full connectivity was there right from the beginning. It was most articulately voiced by a European research program called The Disappearing Computer:
The Disappearing Computer (DC) is an EU-funded proactive initiative of the Future and Emerging Technologies (FET) activity of the Information Society Technologies (IST) research program (2001-2003). The mission of the initiative is to see how information technology can be diffused into everyday objects and settings, and to see how this can lead to new ways of supporting and enhancing people's lives that go above and beyond what is possible with the computer today. Specifically, the initiative focuses on three interlinked objectives:
Create information artifacts based on new software and hardware architectures that are integrated into everyday objects.
Look at how collections of artifacts can act together, to produce new behavior and new functionality.
Investigate new approaches for designing collections of artifacts in everyday settings, and how to ensure that people's experience in these new environments is coherent and engaging. (Link)
The AI Debate
It is clear that these questions relate to what is now called the AI debate, yet they became tied to the applications and services of IoT and its reference architectures. The basic, and main, question of how we want to live together with an intelligent environment was never really part of a general discussion; had it been properly addressed, we would have been able to embed AI into a clear framework of real operational value – the actual devices – instead of regulating it.
I was at one of the kick-off sessions of the project, and in Jönköping in 2000 I first encountered the vision:
Electricity was the actual metaphor that the EU's first project officer, Jakub Wejchert, used. He spoke of a vision of the future as one in which our everyday world of objects and places becomes 'infused' and 'augmented' with information processing. Computing, information processing, and computers disappear into the background and take on a role more similar to that of electricity today – an invisible, pervasive medium distributed in our real world. In contrast, what will appear to people are new artifacts and augmented places that support and enhance activities in natural, simple, and intuitive ways.
That, however, does not make it unproblematic. What we encounter in such an environment are problematic and futile attempts to claim any single one – subject, time, space, place – as an undisputed starting point for making meaning or sense, for deciding how to act, for recalling how previous procedures operated, for projecting a sense of self into the future. In a mediated environment, it is no longer clear what is being mediated and what mediates. Such environments – your kitchen, living room, our shopping malls, cobbled streets in old villages – are new beginnings, as they reformulate our sense of ourselves in places, in spaces, in time. As new beginnings, they begin new media.
I was dozing off in the big conference hall, thinking about all these new beginnings, this longing for new space to occupy as if it were the wild, Wild West. What worried me most were some rather satisfied minds. I too could visualize a setting in which people resonate with media through simulating processes – simulating processes that are actual processes, for in a digitized real, any process might become experiential, might resonate. Then a speaker, I believe it was Streitz, came on stage. He spoke of a Bluetooth ring that, whenever I walked in the woods, could – if I so liked – enhance this walk for me (I wondered: who needs to enhance a wood?) by activating a mechanism that would either reveal a screen near the tree or send information to a handheld computer. And on that screen, I could read some more about that tree.
I was wide awake, and I felt very strange. I looked around me, searching for any human presence in that lecture room to wink at me and tell me it was all a big, sick joke. I recalled my sword and King Arthur and my talking trees. No screens there. That was when I realized it. I asked myself: could some of what these people are talking about actually be dangerous? The best thing I can do is stay close to them, track what they are interested in, and either hack it or try to confuse the spaces in which they operate. (Link)
From my tone, you can gather that I was still in shock. Here was a vision of a world in which computing would disappear into the known environment, as if computing were a set of democratic processes – which it was not and is not. It was not even a naive decision; it was completely taken for granted, even though the audience was filled not only with engineers but with designers and ethnographers as well. The very notion of AI was already entangled in that vision and program: it placed itself at the heart of decision-making, which up until that point had been a purely human toolset, one whose skills were potentially within the reach of any person (arguably a problematic statement, given the distance between public and elite).
It was logical that a general audience could not see something as big as pervasive computing, ubiquitous computing, or IoT. They saw bits and pieces, fragments of new connectivity: RFID gradually entering stores and clothes, smart thermostats like Nest, or smart doorbells and cameras in the home.
The Kaczynski Impact
But maybe one of the reasons there never was a solid debate about what it meant to overlay the world entirely with a digital layer, addressable by and addressed to commerce and government, was that this position was made extremely suspect by Ted Kaczynski, the Unabomber. A core issue in his thinking is his distinction between small-scale technology, technology "that can be used by small-scale communities without outside assistance," and organization-dependent technology, "technology that depends on large-scale social organization."
According to him, there are no significant cases of regression in small-scale technology, "but organization-dependent technology does regress when the social organization on which it depends breaks down." His two major assumptions, though, are very true. The first is that "if the use of a new item of technology is initially optional, it does not necessarily remain optional, because the new technology tends to change society in such a way that it becomes difficult or impossible for an individual to function without using that technology."
The second is that he foresaw that the system may break down, and if it does break down, "the consequences will still be very painful. But the bigger the system grows the more disastrous the results of its breakdown will be. So if it is to break down it had best break down sooner rather than later." If it breaks down, "there may be a period of chaos, a 'time of troubles.'" It would be impossible to predict what would emerge from such a time of troubles, but at any rate, the human race would be given a new chance.
He foresaw that whoever lives in the interface of technical applications will only be able to innovate within the limitations of that same interface. Progress will be made following the guidelines of the interface, building new applications with the same rules. From this it follows that whoever is first is the winner who takes it all. This shows not only on the application and product level – see Apple, Google, and Microsoft – but in the very notion of commercial applications and services taking over critical services from the state in connectivity, energy, and communications. This model, this interface, has become so logical and normal that it is hard to even consider deviating from it.
Yet it is not hard to see that his devastating methods of delivering that message, with their death and destruction, were the reason his analysis did not take root. This is a pity, as it has led, on the one hand, to a state-controlled system in China where innovation is subject to strategic state planning, and on the other hand, to a US amalgam of for-profit companies with diverse and widely ranging interests. What has come to be viewed as nonviable is a mix of both: commons-led innovation, including infrastructure, supported by commercial initiatives. Picture the Apple model as a commons. The mobile phone is in public hands (the public sets hardware and software rules), the App Store is an EU App Store deciding which apps run, and new open-source applications for search, social media, and real-time communication can be picked from the hundreds of projects of the Next Generation Internet program. How logical can it be?
We Need to Update the Cyber Resilience Act
At the moment, there are about 17 Directives, Regulations, and Acts that aim to reclaim European digital sovereignty. If you regulate hardware instantiations like the mobile phone, you immediately gain digital autonomy and democratic control over its operating system, its capacity to host the EU Wallet safely and securely, and its app ecosystem. We think that updating the Cyber Resilience Act to describe devices, routers, and mobile phones is a good start.