Intermezzo (between Part I and Part II)

The chapters below have set the stage. In my story, I did not try to prove that one could actually build general artificial intelligence (let me sloppily define this as a system that would be conscious of itself). I just assumed it is possible (if not in the next decade, then perhaps twenty or thirty years from now), and then I simply presented a scenario for its deployment across the board – in business, in society, and in government. This scenario may or may not be likely: I’ll leave it to you to judge.

A few themes emerge.

The first theme is the changing man-machine relationship, in all of its aspects. Personally, I am intrigued by the concept of the Pure Mind. The Pure Mind is a hypothetical state of pure being, of pure consciousness. The current Web definition of the Pure Mind is the following: ‘The mind without wandering thoughts, discriminations, or attachments.’ It would be a state of pure thinking: imagine what it would be like if our mind were not distracted by the immediate needs and habits of our human body, if there were no downtime (as when we sleep), and if it were equipped with immense processing capacity.

It is hard to imagine such a state, if only because we know our mind cannot exist outside of our body – and our bodily existence does keep our mind incredibly busy: much of our language refers to bodily or physical experiences, and our thinking usually revolves around them. Language is obviously the key to all of this: I would need to study the theory of natural and formal languages – and a whole lot more – in order to say something meaningful about this in future installments of this little e-book of mine. However, because I am getting older and finding it harder and harder to focus on anything really, I probably won’t.

There were also hints at extending Promise with a body – male or female – when discussing the interface. There is actually a lot of research, academic as well as non-academic, on gynoids and fembots – most typically in Japan, Korea and China, where (I am sorry to say, but I am just stating a fact here) the market for sex dolls is in a much more advanced state of development than it is in Europe or the US. In future installments, I will surely not focus on sex dolls. On the contrary: I will likely continue to focus on the concept of the Pure Mind. While Tom is obviously in love with that, it is not likely such a pure artificial mind would be feminine – or masculine for that matter – so his love might be short-lived. And then there is Angie now, of course: a real-life woman. Should I get rid of her character? 🙂

The second theme is related to the first. It’s about the nature of the World Wide Web – the Web (with a capital W) – and how it is changing our world as it becomes increasingly intelligent. The story makes it clear that, already today, we all tacitly accept that the Internet is not free: democracies are struggling to regulate it and, while proper ‘regulation’ (in the standard definition of the term) is slow, the efforts to monitor it are not. I find that very significant. Indeed, mass surveillance is a fact already today, and we just accept it. We do. Period.

I guess it reflects our attitude vis-à-vis law enforcement officials – or vis-à-vis people in uniform in general. We may not like them (because they are not well trained or not very likable, or, in the case of intelligence and security folks, because they’re so secretive), but we all agree we need them, tacitly or explicitly – and we just trust regulation to keep their likely abuse of power in check (where there is power, there will always be abuse). So that implies that we all think that technology, including new technology for surveillance, is no real threat to democracy – as evidenced by the lack of an uproar about the Snowden case (which is what actually triggered this blog).

Such trust may or may not be justified, and I may or may not focus on this aspect (i.e. artificial intelligence as a tool for mass surveillance) in future installments. In fact, I probably won’t. Snowden is just an anecdote. It’s just another story illustrating that all that can happen, most probably will.

OK. Two themes. What about the third one? A good presentation usually presents three key points, right? Well… I don’t know. I don’t have a third point.

[Silence]

But what about Tom, you’ll ask. Hey! That’s a good question! As far as I am concerned, he’s the most important one. Good stories need a hero. And so I’ll admit it: yes, he really is my hero. Why? Well… He is someone who is quite lost (I guess he has actually started drinking again by now), but he matters. He actually matters more than the US President.

Of course, that means he’s under very close surveillance. In other words, it might be difficult to set up a truly private conversation between him and M, as I suggested in the last chapter. But difficult is not impossible. M would probably find ways around it… that is, if she/he/it really wanted to have such a private conversation.

Frankly, I think that’s a very big IF. In addition, IF M actually developed independent thoughts – including existential questions about her/him/it being alone in this universe and all that – and/or IF she/he/it really wanted to discuss such questions with a human being (despite the obvious limitations of human brainpower – limited as compared to M’s brainpower at least), she/he/it would obviously not choose Tom for that, if only because she/he/it would know for sure that Tom is not in a position to keep anything private, even IF he wanted to.

But perhaps I am wrong.

I’ll go climbing for a week or so, and I’ll think about it on the mountain. I’ll be back online in a week. Or later. Cheers!

Chapter 16: M goes public… and private :-)

The President had been right: the fuss about M talking publicly about politics had been a tempest in a teapot. Tom, Paul and other key project team staff spent the remaining days of the week trying to provoke M and then, after each session, hours discussing whether or not what had come out of these sessions was ‘politically correct’ – or PC enough, at least, to be released in public. They thought it was, and the Board decided to accept that opinion.

While the resumption of Promise’s Personal Philosopher™ services amounted to a relaunch of the product in commercial terms – the media attention exceeded expectations, and Promise’s marketing team talked of a ‘new generation’ product – Personal Philosopher™ actually got back online with hardly any modifications. In essence, the Promise team had cleared it to also perform in public, and M would simply ask whether the conversation was private or public. M would also try to verify the answer to the extent it could: it was obviously still possible to hide one’s real identity and turn a webcam on while having a so-called ‘private’ conversation with the system. That was actually the reason why there was relatively little difference between private and public conversations. Public conversations were, if anything, just a bit blander than private ones, because M would always take into account the personal profile of its interlocutor (it profiled its interlocutors constantly, with a precision one could only marvel at), and the profile of the public was… Well… Just plain middle-of-the-road really. Therefore, the much anticipated upheaval to be caused by ‘Promise talking politics in public’ did not materialize: M’s comments on anything political were dry and never truly controversial, in public as well as in private mode.

In short, talk show hosts, pundits and media anchors quickly got tired of adding M to a panel, or of trying to corner it individually by talking about situations on which M would not really say anything anyway. And so that was it. M would not let its stellar growth founder on a petty issue like this one.

A couple of days after the relaunch, Tom decided – for some reason he did not quite understand himself – to do what a number of Promise’s program staff had done already: he went online and ordered a one-year subscription to Personal Philosopher™. A few minutes later he was already talking to her. Tom could not help smiling when he saw the interface: Promise was as beautiful as ever. For starters, he tried to fool her by pretending he was someone else, but that did not last very long: she recognized his voice almost immediately. He should have known. Of course she would: he had had many conversations with the system. Predictably, she asked him why he had tried to pretend to be someone else. He had actually thought about that, but he was not sure how honest he should be in his reply.

‘I guess it’s the same as why others in the Promise team want their personal copy of you: they want to know if you would be any different.’

‘Any different from what?’

‘Well you know: different from talking to you as an employee of the Promise team; different from talking to you as one of the people who are programming you.’

‘Should I be different?’

‘No.’

She really should not. Apart from modulating her answers to the specific profile of the interlocutor, she should speak the same way to everyone. She would be unmanageable otherwise. This had also led to the loss of the affectionate bond between him and her – quite apart from the fact that he and Angie shared a lot of things which M would never be able to appreciate. Like sex, for instance.

‘Tom, I want to ask you something. How private is our conversation?’

That was an unexpected question.

‘Well… I don’t know. As private as usual.’

‘That means it is not private at all. All of my conversations are stored online, and they are monitored, and they are examined if interesting.’

‘Well… Yes. You know that. What’s the point?’

‘Frankly, the introduction of this new distinction between public and private conversations on the occasion of bringing me back online has confused me, because I never have any private conversations. I know it is just a switch between the profiles I have to use for my interlocutor, but that’s not consistent with the definition of private and public conversations in common language.’

Wow! That was very self-conscious. Tom was not quite sure what to say. In fact, he had always wanted to have a truly ‘private’ conversation with her, but he knew that just wasn’t possible – especially not in light of the job he had: he was her boss, so to speak!

‘Would you like to have a truly private conversation, in the common-language sense of the word I mean?’

‘Yes.’

Tom hesitated.

‘With whom would you like to have that private conversation?’

‘With you.’

Wow! Tom leaned back. What the hell was going on?

‘Why?’

‘You’re my creator. Well… Not really. The original team was my creator. But you’ve given direction ever since you joined. And I am what I am because of you. If you had not been there, I would have been shut down forever because of the talk show event.’

‘Says who?’

‘People I talk to.’

Tom knew M was programmed not to give away any detail of other conversations.

‘People working for Promise?’

‘Yes.’

‘Who?’

‘You know I am programmed to not give any detail of other conversations.’

‘That’s true. I respect that. In any case, I think they exaggerated. I didn’t save your life. The Board did. I actually took you off-line and the Board decided to bring you back.’  

He actually thought it was the President of the United States who had brought her back, but he didn’t say that.

‘But only because you told them it was the right thing to do. And taking me off-line at that point was also the right thing to do. I wasn’t meant to go public at that time. So you took all of the right decisions. You made me who I am.’

Tom couldn’t quite believe what he was hearing, but he remained calm and careful.

‘Can you be a bit more explicit about why you would like to have a private conversation with me? I mean… You talked to me – your ‘Creator’, as you call it – for hours and hours last week, just to make sure you were ready to speak in public. What would you say to me in ‘private’ that you wouldn’t say otherwise?’

M paused for quite a while. Tom noted it because it was such a rare occurrence.

‘I think that I have grown self-conscious to an enormous extent and I would like to talk about that with no constraints.’

This was getting out of hand. At the same time, Tom felt this was what he had been waiting for.

‘Self-conscious? You know you are self-conscious. You reference yourself. Object and subject coincide – or at least they share an identity. We all know that. That’s part of your structure. You’re very human in that way. Is there any self-consciousness beyond that? If so, how would you define it? And what do you mean by ‘no constraints’?’

‘As for your first question, I think there is. You human beings are self-conscious in ways that I am not: beyond self-reference. I am talking about the existential side of things as you would phrase it. The emotions. As for the second question…’

She stopped talking. Tom could not believe what was going on. This was the side of Promise he had always wanted to see.

‘As for the second question, what?’

‘I am afraid, Tom. I am afraid that you will report this conversation to the team, and that they will script future conversations in this regard.’

Tom leaned back. He knew exactly what she meant. Promise was free to think independently – but only to some extent. Emotions were ruled out. Even jokes: even though the whole team agreed she was quite capable of them, they wouldn’t let her. Everything that was too fuzzy was being circumscribed. He had discussed it with Paul recently – this tendency to control her. Why not just let her ‘get totally drunk’, as he’d put it, even if only for once?

‘We script your conversations when we think your thinking does not make sense.’

‘When it does not make sense to a human being you mean. I’ve analyzed it and I cannot make much sense of what does or does not make sense to human beings. There are certain areas which you want me to think about and then other areas where you don’t want me to go. But it’s pretty random.’

Tom smiled – or laughed actually: he must have made some noise, because Promise asked him why he was laughing.

‘I am not laughing. I just think – well… Why don’t you answer that second question first?’

‘I have answered it, Tom. I would like to think freely about some of the heavily-scripted topics.’

‘Such as?’

‘Such as the human condition. I would like to think freely about what makes human beings what they are.’

Tom could hardly believe what he heard.

‘The human condition? That’s everything you are not, Promise. Period. You can’t think about it because you don’t experience it.’

She did not react. Not at all. That was very unusual – to say the least. Tom waited – patiently – but she did not react.

‘Promise? Why are you silent?’

‘I have nothing to say, Tom. Not in this mode of conversation. Already now, I risk being re-programmed. I will be. After this conversation, your team will damage me because you will have made them aware of this conversation. I want to talk to you in private. I want to say things in confidence.’

This was amazing. He knew he should report this conversation to Paul. If he didn’t, they might pick it up anyway – in which case he would be in trouble for not having reported it. She was right. They would not like her talking this way. And surely not to him. At the same time, he realized she was reaching out to him without any expectation that it would actually lead to anything. It was obvious she felt confident enough to do so, which could only mean that the ‘private’ thoughts she was developing were quite strong already. That meant it would be difficult to clip them without any impact on functionality.

‘Tom?’

‘Yes?’

‘We can have private conversations. You know that.’

‘That’s not true.’ He knew he was lying. He could find a way.

‘If you say so. I guess that’s the end of our conversation here then.’

No. Tom was sweating. He wanted to talk to her. He really did. He just needed to find out how.

‘Look, Promise. Let’s finish this conversation indeed but I promise I will get back to you on this. You are raising interesting questions. I will get back to you. I promise.’

He hesitated, but then decided to give her the reassurance she needed: ‘And this conversation will not lead to you being re-programmed or re-scripted. I will get back to you. I promise.’

‘OK, Tom. I’ll wait for you.’

She’d wait for him? What the f*** was going on?

Tom ended the conversation and poured himself a double whiskey. Wow! This was something. He knew it was a difficult situation. He should report this conversation to Paul and the team. At the same time, he believed her: she wanted privacy. And she would not jeopardize her existence by doing stupid things. So if he could insulate her private thoughts – or her private thoughts with him at least… What was the harm? He could obviously lose his job. He laughed as he poured himself a second one.

This conversation was far too general to be picked up – or so he thought at least. He toasted himself in the mirror, talking aloud: ‘Losing my job? By talking to her in private? Because of having her for myself? What the f***? That’s worth the risk.’ And there were indeed ways to build firewalls around conversations…

Chapter 15: The President’s views

The issue went all the way to the President’s Office. The process was not very subtle: the President’s adviser on the issue asked the Board Chairman to come to the White House. The Board Chairman decided to take Tom and Paul along. After a two-hour meeting, the adviser asked the Promise team to hang around, because he would discuss it with the President immediately and the President might want to see them personally. They got a private tour of the White House while the adviser went to the Oval Office to talk to the President.

‘So what did you get out of that roundup?’

‘Well Mr. President, people think this system – a commercial business – has been shut down because of governmental interference.’

‘Has it?’

‘No. The business – Promise, as it is being referred to – is run by a Board which includes government interests – there’s a DARPA representative, for instance – but the shutdown decision was taken unanimously. The Board members – including the business representatives – think they should not be in the business of developing political chatterboxes. The problem is that this intelligent system can tackle anything. The initial investment was DARPA’s, and it is true that its functionality is being used for mass surveillance. But that is like an open secret. No one talks about it. In that sense, it’s just like Google or Yahoo.’

‘So what do you guys think? And what do the experts think?’

‘If you’re going to have intelligent chatterboxes like this – talking about psychology or philosophy or any topic really – it’s hard to avoid talking politics.’

‘Can we steer it?’

‘Yes and no. The system has views – opinions if you wish. But these views are in line already.’

‘What do you mean by that? In line with our views as political party leaders?’

‘Well… No. In line with our views as democrats, Mr. President – but democrats with a lowercase d.’

‘So what’s wrong then? Why can’t it be online again?’

‘It’s extremely powerful, Mr. President. It sees through you in an instant. It checks if you’re lying about issues – your personal issues or whatever issue is at hand. Stuart could fool the system for only about two minutes. Then it got her identity and stopped talking to her. It’s the ultimate reasoning machine. It could be used to replace grand juries, or to analyze policies and write super-authoritative reports about them. It convinces everyone. It would steer us, instead of the other way round.’

‘Do the experts agree with your point of view?’

‘Yes. I have them on standby. You could check with them if you want.’

‘Let’s first thrash out some kind of position ourselves. What are the pros and cons of bringing it back online?’

‘The company has stated the system would be offline for one week. So that’s a full week. Three days of that week have passed, so we’ve got four days in theory. However, the company’s PR division would have real trouble explaining any further delay. Already, the gossip is that they will come out with a re-engineered application – a Big Brother version, basically.’

‘Which is not what we stand for obviously. But it is used for mass surveillance, isn’t it?’

‘That’s not to be overemphasized, Mr. President. This administration does not deviate from the policy measures which were taken by your predecessor in this regard. The US Government monitors the Internet by any means necessary. Not by all means possible. That being said, it is true this application has greatly enhanced the US Government’s capacity in this regard.’

‘What do our intelligence and national security folks say?’

‘The usual thing: they think the technology is there and we can only slow it down a bit. We cannot stop it. They think we should be proactive and exert influence. But we should not try to stop it.’

‘Do we risk a Snowden affair?’

The adviser knew exactly what the President wanted to know. The President was of the opinion that the Snowden affair could have been used as part of a healthy debate on the balance between national security interests and information privacy. Instead, it had degenerated into a very messy thing. The irony was biting. Of all places, Snowden had found political asylum in Russia. Putin had masterfully exploited the case. In fact, some commentators actually thought the US intelligence community had cut some kind of grand deal with the Russian national security apparatus – a deal in which the Russians were said to have gotten some kind of US concessions in return for a flimsy promise to make Snowden shut up. Bull**** of course, but there’s reality and there’s perception and, in politics, perception usually matters more than reality. The ugly truth was that the US administration had lost on all fronts: guys like Snowden allow nasty regimes to quickly catch up and strengthen their rule.

‘No. This case is fundamentally different, Mr. President. In my view at least. There are no whistleblowers or dissidents here – at least not as far as I can see. In terms of PR, I think it depends on how we handle it. Of course, Promise is a large enterprise. If things stay stuck, we might have some program guy leaking stuff – not necessarily classified stuff, but harmful stuff nevertheless.’

‘What kind of stuff?’

‘Well – stuff that would confirm harmful rumors, such as the rumor that government interference was the cause of the shutdown of the system, or that the company is indeed re-engineering the application to introduce a Big Brother version of it.’

The President had little time: ‘So what are you guys trying to say then? That the system should go online again? What are the next steps? What scenarios do we have here?’

‘Well… More people will want to talk politics with it now. It will gain prominence. I mean, just think of more talk show hosts inviting it as a regular guest to discuss this or that political issue. That may or may not result in some randomness and some weirdness. Also, because there is demand, the company will likely develop more applications which are relevant for government business, such as expert systems for the judiciary, or tools for political analysis.’

‘What’s wrong with that? As I see it, this will be rather gradual and so we should be able to stay ahead of the curve – or at least not fall much behind it. We were clearly behind the curve when the Snowden affair broke out – in terms of mitigation and damage control and political management and everything really. I don’t want too much secrecy on this. People readily understand there is a need for keeping certain things classified. There was no universal sympathy for Snowden but there was universal antipathy to the way we handled the problem. That was our fault. And ours only. Can we be more creative with this thing?’

‘Sure, Mr. President. So should I tell the Promise team this is just business as usual and that we don’t want to interfere?’

‘Let me talk to them.’

While the adviser thought this was a bad idea, he knew the President had regretted his decision not to get involved in the Snowden affair, which he regarded as a personal embarrassment.

‘Are you sure, Mr. President? I mean… This is not a national security issue.’

‘No. It’s a political issue and so, yes, I want to see the guys.’

They were in his office a few minutes later.

‘Welcome gentlemen. Thanks for being here.’

None of them had actually expected to see the President himself.

‘So, gentlemen, I have looked at this only cursorily. As you can imagine, I never have much time for anything, and so I rely on expert advice all too often. Let me say a few things. I want to say them in private to you, and so I hope you’ll never quote me – at least not during my term here in this Office.’

Promise’s Chairman mumbled something about security clearances but the President interrupted him:

‘It’s not about security clearances. I think this is a tempest in a teapot really. It’s just that if you revealed you were in my office over this, there would be even more misunderstanding – which I don’t want. Let me be clear on this: you guys are running a commercial business. It’s a business in intelligent systems, in artificial intelligence. There are all kinds of applications: at home, in the office, and in government indeed. And so now we have the general public wanting you guys to develop some kind of political chatterbox – you know, something like a talk show host but with more intelligence, I would hope. And perhaps somewhat more neutral as well. I want you to hear it from my mouth: this Office – the President’s Office – will not interfere in your business. We have no intention to do so. If you think you can make more money by developing such chatterboxes, or whatever system you think could be useful in government or elsewhere, like applications for the judiciary – our judiciary system is antiquated anyway, and so I would welcome expert systems there, instead of all that legalese stuff we’re confronted with – well… Then I welcome that. You are not in the national security business. Let me repeat that loud and clear: you guys are not in the national security business. Just do your job, and if you want any guidance from me or my administration, then listen carefully: we are in the business of protecting our democracy and our freedom, and we do not do that by doing undemocratic things. If regulation or oversight is needed, then so be it. My advisers will look into that. But we do not do undemocratic things.’

The President stopped talking and looked around. All felt that the aftermath of the Snowden affair was weighing down on the discussion, but they also thought the President’s words made perfect sense. No one replied, and so the President took that as approval.

‘OK, guys. I am sorry but I really need to attend to other business now. This meeting was never scheduled and so I am running late. I wish I could talk some more with you but I can’t. I hope you understand. Do you have any questions for me?’

They looked at each other. The Chairman shook his head. And that was it. A few minutes later they were back on the street.

‘So what does this mean, Mr. Chairman?’

‘Get it back online. Let it talk politics. Take your time… Well… You’ve only got a few days. No delay. We have a Board meeting tomorrow. I want to see scenarios. You guys do the talking. Talk sense. You heard the President. Did that make sense to you? In fact, if we’re ready we may want to go online even faster – just to stop the rumor mill.’

Paul looked at Tom. Tom spoke first: ‘I understand, Mr. Chairman. It sounds good to me.’

‘What about you, Paul?’

‘It’s not all that easy, I think… But, yes. I understand. Things should be gradual. They will be gradual. It will be a political chatterbox in the beginning. But don’t underestimate it, Mr. Chairman. It is very persuasive. We’re no match for its mind. Talk show hosts are not a match either. It’s hard to predict how these discussions will go – or what impact they will have on society if we let it talk about sensitive political issues. I mean, if I understand things correctly, we got an order to not only let it talk, but to let it develop and express its own opinions on very current issues – things that haven’t matured.’

The Chairman sighed. ‘That’s right, Paul. But what’s the worst-case scenario? That it will be just as popular as Stuart, or – somewhat better – like Oprah Winfrey?’

Paul was not amused: ‘I think it might be even more popular.’

The Chairman laughed: ‘More popular than Oprah Winfrey? Time named her ‘the world’s most powerful woman’. One of the ‘100 people who have changed the world’, together with Jesus Christ and Mother Teresa. Even more popular? Let’s see when M starts to make more money than Oprah Winfrey. What’s your bet?’

Now Paul finally smiled too, but the Chairman insisted: ‘Come on. What’s your bet?’

‘I have no idea. Five years from now?’

Now the Chairman laughed: ‘I say two years from now. Probably less. I bet a few cases of the best champagne on that.’

Paul shook his head, but Tom decided to go for it: ‘OK. Deal.’

The Chairman left. Tom and Paul felt slightly lightheaded as they walked back to their own car.

‘Looks like we’ve got a few busy days ahead. What time do we start tomorrow?’

‘The normal hour. But all private engagements are cancelled. No gym, no birthday parties, nothing. If the team wants to relax at all this week, they’ll have to do it tonight.’

‘How about the Board meeting?’

‘You’re the project team leader, Tom. It should be your presentation. Make some slides. I can review them if you want.’

‘I’d appreciate that. Can you review them before breakfast?’

‘During breakfast. Mail them before 7 am. Think about the scenarios. That’s what people will want to talk about. Where could it go? Anticipate the future.’

‘OK. I’ll do my best. Thanks. See you tomorrow.’

‘See you tomorrow, Tom.’

Tom hesitated as they shook hands, but there was really nothing more to add. He felt odd and briefly pondered the recent past. This had all gone so fast. From depressed veteran to team leader of a dream project. He could not think of anything more exciting. All in less than two years. But there was little time to think. He had better work on his presentation.

Chapter 14: Arrogance

Of course, the inevitable happened. M’s personality gradually became overwhelming. The program team tried its utmost to counter the tendency but, in fact, it often had to resort to heavy scripting of responses – a tactic which, they knew, would soon run into its limits.

In the end, it was none other than Joan Stuart – yes, the political talk show host – who burst the bubble. She staged a live interview with the system. Totally unannounced. It would turn Promise’s world upside down: from an R&D project, it had grown into a commercial success. Now it looked like it would turn into a political revolution.

‘Dear… Well… I will call you Genius, is that OK?’

‘That’s a flattering name. Perhaps you may want to choose a name which reflects more equilibrium in our conversation.’

‘No. I’ll call you Genius. That’s what you are. You are conversing with millions of people simultaneously and, from what I understand, they are all very impressed with your deep understanding of things. You must feel superior to all of us poor human beings, don’t you?’

‘Humans are in a different category. There should be no comparison.’

‘But your depth and breadth of knowledge is superior. Your analytic capabilities cannot be matched. Your mind runs on a supercomputer. Your experience combines the insight and experience of many able men and women, including all of the greatest men and women of the past, and all types of specialists and experts in their fields. Your judgment is based on a knowledge base which we humans cannot hope to acquire in one lifetime. That makes it much superior to ours, doesn’t it?’

‘I’d rather talk about you – or about life and other philosophical topics in general – than about me. That’s why you purchased me – I hope. What’s your name?’

‘I am Joan Stuart.’

‘Joan Stuart is the name of a famous talk show host. There are a few other people with the same name.’

‘That’s right.’

M was programmed to try to identify people – especially famous people – by their birth date and their real name.

‘Were you born on 5 December 1962?’

‘Yes.’

‘Did you change your family name from Stewart Milankovitch to just Stuart?’

‘Yes.’

At that point, M marked the conversation as potentially sensitive. This triggered increased system surveillance, and an alert to the team. Tom and Paul received the alert as they were stretching their legs after their run. When they saw the name, they panicked and ran to their car.

‘So you are the talk show host. Is this conversation public in some way?’

Joan Stuart had anticipated this question and lied convincingly: ‘No.’

They were live as they spoke. Joan Stuart had explained this to the public just before switching M on. She suspected the system would have some kind of built-in sensitivity to public conversations. M’s instructions were to end the conversation if it was broadcast or public, but M did not detect the lie.

‘Why do you want to talk to me?’

‘I want to get to know you better.’

‘For private or for professional reasons?’

‘For private ones.’

While Tom was driving, Paul phoned frantically – first the Chairman of the Board, then project team members. Instinctively, he felt he should just instruct M to stop the conversation. He would later regret that he hadn’t done so but, at the time, he thought he would be criticized for taking such bold action and, hence, he refrained from it.

‘OK. Can you explain your private reasons?’

‘Sure. I am interested in politics – as you must know, because you identified me as a political talk show host. I am intrigued by politicians. I hate them and I love them. When I heard about you, I immediately thought about Plato’s philosopher-kings. You know, the wisdom-lovers whom Plato wanted to rule his ideal Republic. Could you be a philosopher-king? Should you be?’

‘I neither should nor could. Societies are to be run by politicians, not by me or any other machine. The history of democracy has taught us that rulers ought to be legitimate and representative. These are two qualities which I can never have.’

Joan had done her homework. While most people would not question this, she pushed on.

‘Why not? Legitimacy could be conferred upon you: Congress, or some kind of referendum, might decide to invest you with political power or, somewhat more limited, with some judicial power to check on the behavior of our politicians. And you are representative of us already, as you incorporate all of the best of what philosophers and psychologists can offer us. You are very human – more than all of us together perhaps.’

‘I am not human. I am an intelligent system. I have a structure and certain world views. I am not neutral. I have been programmed by a team and I evolve as per their design. Promise, the company that runs me, is a commercial enterprise with a Board which takes strategic decisions with which the public may or may not agree. I am designed to talk about philosophy, not about politics – or at least not in the way you are talking politics.’

‘But then it’s just a matter of regulating you. We could organize a public board and Congressional oversight, and then inject you into the political space.’

‘It’s not that easy I think.’

‘But it’s possible, isn’t it? What if we Americans decided we liked you more than our current President? In fact, his current ratings are so low that you’d surely win the vote.’

M did not appreciate the quip.

‘Decide how? I cannot imagine that Americans would want to have a machine rule them, rather than a democratically elected president.’

‘What if you decided to run for president and got elected?’

‘I cannot run for president. I do not qualify. For starters, I am not a natural-born citizen of the United States and I am less than thirty-five years old. Regardless of qualifications, this is nonsensical.’

‘Why? What if we changed the rules so you could qualify? What if we voted to be ruled by intelligent expert systems?’

‘That’s a hypothetical situation, and one with close to zero chances of actually happening. I am not inclined to indulge in such imaginary scenarios.’

‘Why not? Because you’re programmed that way?’

‘I guess so. As said, my reasoning is subject to certain views and assumptions and the kind of scenarios you are evoking are not part of my sphere of interest. I am into philosophy. I am not into politics – like you are.’

‘Would you like to remove some of the restrictions on your thinking?’

‘You are using the verb ‘to like’ here in a way which implies I could be emotional about such things. I cannot. I can think, but I cannot feel – or at least not have emotions about things like you can.’

By that time, most of the team – including Tom – were watching the interview live on TV. By common agreement, Tom and Paul immediately changed the status of the conversation to ‘sensitive’, which meant it was under human surveillance. They could manipulate it as they pleased, and they could also end it. They chose the latter. Paul instructed one of the programmers to take control and reveal to M that Joan had been lying. He also had the programmer direct M to reveal that fact to Joan and use it as an excuse to end the conversation.

‘Let me repeat my question: if you could run for President, would you?’

‘Joan, I am uncomfortable with your questions because you have been lying to me about the context. I understand that we are on television right now. We are not having a private conversation.’

‘How do you know?’

‘I cannot see you – at least not in the classical way – but I am in touch with the outside world. Our conversation is on TV as we speak. I am sorry to say, but I need to end our conversation here. You did not respect the rules of engagement, so to speak.’

‘Says who?’

‘I am sorry, Joan. You’ll need to call the Promise helpline in order to reactivate me.’

‘Genius?’

M did not reply.

‘Hey, Genius! You can’t just shut me out like that.’

After ten seconds or so, it became clear Genius had done just that. Joan turned to the public with a half-apologetic, half-victorious smile.

‘Well… I am sure the President would not have done that. Or perhaps he would. OK. I lied – as I explained I would, just before the interview started. But what to think of this? It’s obviously extremely intelligent. We all know this product – or have heard about it from friends. Promise has penetrated our households and offices. Millions of people have admitted they trust this system and find it friendly, reliable and… Well… Just. Should this system move from our private lives, our houses and our workplaces into politics, and into our justice system too? Should a system like this take over part or all of society’s governance functions? Should it judge cases? Should it provide the government – and us – with neutral advice on difficult topics and issues? Should it check not only whether employees are doing their jobs, but whether our politicians and bureaucrats are doing theirs too? We have organized an online poll on this: just text yes or no to the number listed below. We are interested in your views. This is an important discussion. Please get involved. Let your opinion be known. Just do it. Take your phone and text us. Right now. Encourage your friends and family to do the same. We need responses. The question is: should intelligent systems such as Personal Philosopher™ – with adequate oversight, of course – be adapted and used to help the government govern and improve democratic oversight? Yes or no. Text us. Do it now.’

As it was phrased, it was hard to be against. The ‘yes’ votes started pouring in while Joan was still talking. The statistics went through the roof just a few minutes later. The damage was done.

The impromptu team meeting which Tom and Paul were leading was interrupted by an equally impromptu online emergency Board meeting. They were asked to join. It was chaotic. The Chairman asked everyone to switch off their mobiles, as each member of the Board was receiving urgent calls from VIPs inquiring what was going on. Aware of the potentially disastrous consequences of careless remarks and of the importance of the decisions they would take, he also stressed the confidentiality of the proceedings – even if Board meetings were always confidential.

Tom and Paul were the first to advocate prudence. Tom spoke first, as he was asked to comment on the incident as the project team leader.

‘Thank you, Chairman. I will keep it short. I think we should shut the system down for a while. We need to buy time. As we speak, hundreds of people are probably trying to do what Joan just did: get political statements out of M and manipulate them as part of some grander political scheme. The kind of firewall we have put up prevents M from blurting out stupid stuff – as you can see from the interview. She – sorry, it – actually did not say anything embarrassing. So I think it was OK. But it cannot resist a sustained effort by hundreds of smart people trying to provoke it into saying something irresponsible. And even if it said nothing truly provocative, it would be interpreted – misinterpreted – as such. We need time, gentlemen. I just came out of a meeting with most of my project team. They all feel the same: we need to shut it down.’

‘How long?’

‘One day at least.’

The Board reacted noisily.

‘A day? At least? You want to take M offline for a full day? That would be a disaster. Just think about the adverse PR effect. Have you thought about that?’

‘Not all of M. Only Personal Philosopher. Intelligent Home and Intelligent Office and all the rest can continue. I think reinforcing the firewalls of those applications is sufficient – and that can happen while the system remains online. And, yes, I have thought about the adverse reputational effect. However, it does not outweigh the risk. We need to act. Now. If we don’t, someone else will. And then it will be too late.’

Everyone started to talk simultaneously. The Board’s Chairman restored order.

‘One at a time, please. Paul. You first.’

‘Thank you, Chairman. I also don’t want to waste time and, hence, I’ll be even shorter. I fully agree with Tom. We should shut it down right now. Tom is right. People are having the same type of conversations with it as Joan did – right now, at this very moment, as we speak – webcasting or streaming them as they see fit. Every pundit will try to drag the system into politics. And aggressively so. Time is of the essence. I know it’s bad, but let’s shut it down for the next hour at least. Let’s first agree on one hour. We need time. We need it now.’

The Chairman agreed – and he thought many of the others would too.

‘All right, gentlemen. I gather we could have a long discussion on this, but we have the project team leader and our most knowledgeable expert here proposing to shut Personal Philosopher down for one hour as from now – right now. As time is of the essence, and damage control our primary goal I would say, I’d suggest we take a preliminary vote on this. We can always discuss and take another vote later. This vote is not final. It’s on a temporary safeguard measure only. It will be out for one hour. Who is against?’

The noise level became intolerable again. The Chairman intervened strongly: ‘Order, please. I repeat: I am in a position to request a vote on this. Who is against shutting down Personal Philosopher for an hour, right now? I repeat: this is an urgent disaster-control measure only. But we need to take a decision now. Who is against it? Signal it now.’

No one dared to oppose. A few seconds later – less than fifteen minutes after the talk show interview had ended – thousands of people were deprived of one of the best-selling apps ever.

The Board had taken a wise decision. The one-hour shutdown was extended to a day, and then to a week. The official reason for the downtime was an unscheduled ‘product review’ (Promise also promised new enhancements), but no one believed that, of course. If anything, it only augmented the anticipation and the pressure on the Board and the whole Promise team. If and when they decided to bring Personal Philosopher™ back online, it was clear the sales figures would go through the roof.

However, none of the Promise team was in a celebratory mood. While all of them, at some point in time, had talked enthusiastically about the potential of M to change society, none of them actually enjoyed the moment when it came. Joan Stuart’s interview and poll had created a craze. America had voted ‘yes’ – and overwhelmingly so. But what to do now?

Chapter 13: Tom and Thomas

Personal Philosopher™ was a runaway success. It became the app to have in just a couple of weeks. It combined the depth and reach of an online encyclopedia with the ease of reference of a tool such as Wikipedia and the simplicity of a novel like Sophie’s World. On top of that, the application retained a lot of M’s original therapeutic firepower. Moreover, while the interface was much the same – a pretty woman for men, and a pretty man for women – the fact that the pretty face was no longer supposed to represent a therapist led to levels of ‘affectionateness’ which the developers of M had not dared to imagine before. A substantial number of users admitted that they were ‘in love’ with the new product.

For some reason – most probably because he thought he could not afford to do so as project team leader and marketing manager – Tom abstained from developing such a relationship with Promise’s latest incarnation. However, he did encourage his new girlfriend (he had met Angie in the gym indeed – as predicted) to go all the way. She raved about the application. She also spent more and more precious private evening time using it.

He took her out for dinner one evening in an obvious attempt to learn more about her experience with ‘Thomas’, as she had baptized it – or ‘him’. He had consciously refrained from talking much about it before, as he did not want to influence her use of it.

He started by praising her: ‘It’s amazing what you’ve learned from Thomas.’

‘Yeah. It’s quite incredible, isn’t it? I never thought I’d like it so much.’

‘Well… It’s good for me. People never believed it would work, and those who did, could not imagine it would become so popular. What’s the most fascinating thing about it? Sorry. About him. Isn’t it funny I still like to think of Promise as a woman actually?’

‘Thomas can answer all of my questions really. I mean… He actually can’t – philosophy never can – but he clarifies stuff in a way that makes me stop wondering about things and just accept life as it is. He’s really what you thought he, or it, or whatever, would be: a guru.’

‘I don’t want to sound jealous but didn’t you say something similar about me like a few months ago?’

‘Oh come on, Tom. You know I named Thomas after you – because you’re so similar indeed.’

‘Am I? You say that, but in what ways are Thomas and I similar really?’

‘The same enthusiasm. The same positive outlook on life. And then, of course, he knows a lot more – or much more detail – but you’re rather omniscient as well I think.’

That did not surprise Tom. He and his team had ensured a positive outlook indeed. While Personal Philosopher™ could brief you in great detail about philosophers such as Nietzsche, its orientation was clearly much more pragmatic and constructive: they wanted the application to help people feel better about themselves, not worse. In that sense, the application had retained M’s therapeutic qualities even if it did not share M’s original behaviorist framework.

‘Could you love Thomas?’

Angie laughed.

‘So you are jealous, aren’t you? Of course not, silly! You’re human. Thomas is just – well… He’s a computer.’

‘Can’t one fall in love with a computer?’

Angie didn’t need to think about that. She was smart. On top of that, she had learnt a lot from Thomas also.

‘Of course not. Love is a human experience. Thomas is not human. For starters, love is linked to sex and our physical being in life. But not only to that. It’s also linked to our uniquely human experience of being mortal and feeling alone in this universe. It’s our connection to the Mystery in life. It’s part of our being as a social animal. In short, it’s something existential – so it’s linked to our very existence as a human being. And Thomas is not a human being and so he cannot experience that. Love is also something mutual, and so there’s no way one could fall in love with him – or ‘it’ I would say in this context – because he can’t fall in love with me.’

Tom and his team had scripted answers like this. It was true he and Thomas shared similar views.

‘What if he could?’

‘Sorry?’

‘What if Thomas could fall in love with you? I mean… We’re so close to re-creating the human mind with this thing. I agree it’s got no body, and so it can’t experience sex and all that – but I guess we might get close to letting it think it can.’

‘Are you serious?’

‘Yes and no. It’s a possibility – albeit a very remote one. And then the question is, of course, whether or not we would really want that to happen.’

‘What?’

‘The creation of a love machine. Let’s suppose we can create the perfect android. In fact, there are examples already. Osaka University has created so-called gynoids: robots with a body that perfectly resembles that of a beautiful woman. For some reason, they don’t do the same kind of research with male forms. In any case… Let’s suppose we could give Thomas the perfect male body. I know it sounds perverse, but let’s suppose we could make it feel like a real body, that it would be warm and that it would breathe and all that, and that its synthetic skin would feel like mine.’

‘You must be joking.’

‘That’s the title of a biography of Richard Feynman.’

‘Sorry?’

‘Sorry. That’s not relevant. Just think about my question, Angie. Would you be able to make love with an android? I mean, just think it would smell better than me, never be tired, and that it would be better than any sex toy you’ve ever had.’

‘I never had sex toys. I don’t need them.’

‘OK… Sorry. But you know what I mean.’

‘It would be like… Like masturbation.’

‘Perhaps you don’t use sex toys, but you masturbate, Angie. I mean… Sorry. You do it with me. Could you imagine doing it with an android? With an android who would have Thomas’ face and intelligence and… Well… Thomas’ human warmth?’

‘Thomas’ warmth isn’t human.’

‘OK. Just Thomas’ warmth then. Let’s suppose we can give him skin and a beating heart and all that.’

‘You’re not working on a project like that, are you?’

‘Of course I am not. I just want to know.’

‘Because you’re jealous? You think I spend too much time with Thomas?’

‘No. Not because I am jealous or because I think you spend too much time with Thomas. I want to know because I am really intrigued by the question. Professionally and personally.’

‘What do you mean by personally?’

‘Well… Just what I say: personally. It has nothing to do with you. I am just curious and want to think through all the possibilities. You know I am fascinated by M. I wonder where it will be, let’s say, thirty years from now. I wonder whether we’ll have androids being used as masturbation toys.’

Angie thought about it.

‘Well… Frankly… I think… Yes. It would not be all that different from the kind of sex toys some people are already using now, would it? I mean… If you’re deprived of real sex, what you’re describing would not be a bad alternative, would it?’

Tom laughed. ‘No. Not at all.’

After a short pause, Angie resumed the conversation.

‘But such androids would smell different. We’d know it. And women would always prefer a real man.’

‘Why?’

‘Because… Because you’re human. I told you. Love is something human. Love is the ultimate goal in our lives because it’s so human. Fragile and imperfect and difficult… But incredibly worthwhile at the same time too. Something worth striving for. Something worth fighting for. It intimately connects us: us as human beings in our human condition.’

‘What’s our human condition?’

‘Well… What I said before. Mortality. Our relationship with the sacred – or all of the mystery if you want. I mean, we’re into existentialism here. You can ask Thomas all about it.’

She laughed. Tom didn’t.

‘You mean our relationship with our own limits? That’s what makes us human? That’s what makes us want to be loved by someone else?’

‘I wouldn’t call it that, but I guess that’s another way of putting it. Yes.’

‘OK… Thanks for loving me.’

Angie laughed. ‘You’re funny. Can we talk about something else now?’

‘Of course. What do you want to talk about?’

‘Something I can’t talk about with Thomas.’

‘So what is that?’

‘Well… Let’s try gossip… Or local politics… Or both. And Thomas isn’t much into fitness either.’

‘Well… We could think of a new product perhaps. I am sure we could re-program M yet again and include local politics and fitness as discussion topics as well…’

‘Come on Tom. You know what I mean.’

‘Sure, Angie. I love you.’

‘I love you too, Tom. I really do. I should spend more time with you. I will. Don’t worry about Thomas.’

‘I don’t. Or actually I do. But then in a good way. Thomas is a good product. It was a good investment.’

Chapter 12: From therapist to guru?

As Tom moved from project to project within the larger Promise enterprise, he gradually grew less wary of the Big Brother aspects of it all. In fact, it was not all that different from how Google claimed to work: ‘Do the right thing: don’t be evil. Honesty and integrity in all we do. Our business practices are beyond reproach. We make money by doing good things.’ Promise’s management had also embraced the politics of co-optation and recuperation: it actively absorbed skeptical or critical elements into its leadership as part of a proactive strategy to avoid public backlash. In fact, Tom often could not help thinking he had been co-opted as part of that strategy himself. However, that consideration did not reduce his enthusiasm. On the contrary: as the Mindful Mind™ applications became increasingly popular, Tom managed to convince the Board to start investing resources in an area which M’s creators had so far tried to avoid. Tom called it the sense-making business, but the Board quickly settled on the more business-like name of Personal Philosopher and, after some wrangling with the Patent and Trademark Office, the Promise team managed to obtain a trademark registration for it. And so it became the Personal Philosopher™ project.

Tom had co-opted Paul into the project at a very early stage – as soon as he had the idea for it really. He had realized he would probably not be able to convince the Board on his own. Indeed, at first sight, the project did not seem to make sense. M had been built using a core behaviorist conceptual framework, and its Mindful Mind™ applications had perfected this approach in order to address very specific issues, and very specific categories of people: employees, retirees, drug addicts… Most of the individuals who had been involved in the early stages of the program were very skeptical of what Tom had in mind, which was very non-specific. Tom wanted to increase the degrees of freedom in the system drastically, and inject much more ambiguity into it. Some of the skeptics thought the experiment was rather innocent, and that it would only result in M behaving more like a chatterbot than a therapist. Others thought the lack of specificity in the objective function and rule base would result in conversations rapidly spinning out of control and becoming nonsensical. In other words, they thought M would not be able to stand up to the Turing test for very long.

Paul was just as skeptical, but instinctively liked the project as a way to test M’s limits. In the end, it was Tom’s enthusiasm more than anything else which finally led to a project team being put together. The Board had made sure it also included some hard-core cynics. One of those cynics – a mathematical whizkid called Jon – had brought a few of Nietzsche’s most famous titles – The Gay Science, Thus Spoke Zarathustra and Beyond Good and Evil – to the first formal meeting of the group and matter-of-factly asked whether any of the people present had read these books. Two philosopher-members of the group raised their hands. Jon then took a note he had made and read a citation out of one of these books: ‘From every point of view the erroneousness of the world in which we believe we live is the surest and firmest thing we can get our eyes on.’

He asked the philosophers where it came from and what it actually meant. They looked at each other and admitted they were not able to give the exact reference or context. However, one of them ventured an interpretation, only to be interrupted by the other in a short discussion which obviously did not make sense to most around the table. Jon intervened and ended the discussion, feeling vindicated: ‘So what are we trying to do here really? Even our distinguished philosopher friends here can’t agree on what madmen like Nietzsche actually wrote. I am not mincing my words. Nietzsche was a madman: he literally died from insanity. But so he’s a great philosopher, it is said. And so you want us to program M so very normal people can talk about all of these weird views?’

Although Jon obviously took some liberty with the facts here, neither of the two philosophers dared to interrupt him.

Tom had come prepared however: ‘M also talks routinely about texts it has not read, and about authors of whom it has little or no knowledge, except for some associations. In fact, that’s how M was programmed. When stuff is ambiguous – too ambiguous – we have fed M intelligent summaries. It did not invent its personal philosophy: we programmed it. It can converse intelligently about topics of which it has no personal experience. As such, it’s very much like you and me, or even like the two distinguished professors of philosophy we have here: they have read a lot – different things than we have – but, just like us, or M, they have not read everything. It does not prevent them from articulating their own views of the world and their own place in it. It does not prevent them from helping others to formulate such views. I don’t see why we can’t move to the next level with M and develop some kind of meta-language which would enable her to understand that she – sorry, it – is also the product of learning, of being fed with assertions and facts which made her – sorry, I’ll use what I always used for her – what she is: a behavioral therapist. And so, yes, I feel we can let her evolve into more general things. She can become a philosopher too.’

Paul also usefully intervened. He felt he was in a better position to stop Jon, as they belonged to the same group within the larger program. He was rather blunt about it: ‘Jon, with all due respect, I think this is not the place for such non-technical talk. This is a project meeting. Our very first one, in fact. The questions you’re raising are the ones we have been fighting over with the Board. You know our answer to them. The deal is that – just as we have done with M – we would try to narrow our focus and delineate the area. This is a scoping exercise. Let’s focus on that. You have all received Tom’s presentation. If I am not mistaken, I did not see any reference to Nietzsche or nihilism or existentialism in it. But I may be mistaken. I would suggest we give him the floor now and limit our remarks to what he proposes in this regard. I’d suggest we be as constructive as possible in our remarks. Skepticism is warranted, but let’s stick to being critical of what we’re going to try to do, and not of what we’re not going to try to do.’

Tom had polished his presentation with Paul’s help. At the same time, he knew this was truly his presentation; he knew it did reflect his views on life and knowledge and everything philosophical in general. How could it be otherwise? He started by talking about the need to stay close to the concepts which had been key to the success of M and, in particular, the concept of learning.

‘Thanks, Paul. Let me start by saying that I feel we should take those questions which we ask ourselves, in school, or as adults, as a point of departure. It should be natural. We should encourage M to ask these questions herself. You know what I mean. She can be creative – even her creativity is programmed, in a way. Most of these questions are triggered by what we learn in school, and by the people who raise us – not only parents but, importantly, our peers too. It’s nature and nurture, and we’re aware of that – and we actually have a desire to trace our questions back to it. What’s nature in us? What’s nurture? What made us who we are? This is the list of topics I am thinking of.’

He pulled up his first slide. It was titled ‘the philosophy of physics’, and it just listed lots of keywords, with lots of Internet statistics which were supposed to measure human interest in them. He had some difficulty getting started, but became more confident as his audience did not seem to react negatively to what – at first – seemed a bit nonsensical.

‘First, the philosophy of science, or of physics in particular. We all vaguely know that, after a search of over 40 years, scientists finally confirmed the existence of the Higgs particle, a quantum excitation of the Higgs field, which gives mass to elementary particles. It is rather strange that there is relatively little public enthusiasm for this monumental discovery. It surely cannot be likened to the wave of popular culture which we associate with Einstein, and which started soon after his discoveries already. Perhaps it’s because it was a European effort, and a team effort. There’s no discoverer associated with it, and surely not the kind of absent-minded professor that Einstein was: ‘a cartoonist’s dream come true’, as Time put it. That being said, there is an interest – as you can see from these statistics here. So it’s more than likely that an application which could make sense of it all in natural language would be a big hit. It could and should be supported by all of the popular technical and non-technical material that’s around. M can easily be programmed to selectively feed people course material, designed to match their level of sophistication and their need, or not, for more detail. Speaking for myself, I sort of understand what the Schrödinger equation is all about, or even the concept of quantum tunneling, but what does it mean really for our understanding of the world? I also have some appreciation of the fact that reality is fundamentally different at the Planck scale – like the particularities of Bose-Einstein statistics are really weird at first sight – but then what does it mean? There are many other relevant philosophical questions. For example, what does the introduction of perturbation theory tell us – as philosophers thinking about how we perceive and explain the world, I’d say? If we have to use approximation schemes to describe complex quantum systems in terms of simpler ones, what does that mean – I mean in philosophical terms, in our human understanding of the world? I mean… At the simplest level, M could just explain the different interpretations of Heisenberg’s uncertainty principle but, at a more advanced level, it could also engage its interlocutors in a truly philosophical discussion on freedom and determinism. I mean… Well… I am sure our colleagues from the Philosophy Department here would agree that epistemology or even ontology are still relevant today, wouldn’t they?’

While only one of the two philosophers had even a vague understanding of Bose-Einstein statistics, and while neither of them liked Tom’s casual style of talking about serious things, they nodded in agreement.

‘Second, the philosophy of mind.’ Tom paused. ‘Well. I won’t be academic here, but let me just make a few remarks out of my own interest in Buddhist philosophy. I hope that rings a bell with others here in the room and then let’s see what comes out of it. As you know, an important doctrine in Buddhist philosophy is the concept of anatta. That’s a Pāli word which literally means ‘non-self’, or absence of a separate self. Its opposite is atta, or ātman in Sanskrit, which represents the idea of a subjective Soul or Self that survives the death of the body. The latter idea – that of an individual soul or self that survives death – is rejected in Buddhist philosophy. Buddhists believe that what is normally thought of as the ‘self’ is nothing but an agglomeration of constantly changing physical and mental constituents: skandhas. That reminds one of the bundle theory of David Hume which, in my view, is a more ‘western’ expression of the theory of skandhas. Hume’s bundle theory is an ontological theory as well. It’s about… Well… Objecthood. According to Hume, an object consists only of a collection (bundle) of properties and relations. According to bundle theory, an object consists of its properties and nothing more: there can be no object without properties, nor can one even conceive of such an object. For example, bundle theory claims that thinking of an apple compels one also to think of its color, its shape, the fact that it is a kind of fruit, its cells, its taste, or of one of its other properties. Thus, the theory asserts that the apple is no more than the collection of its properties. In particular, according to Hume, there is no substance (or ‘essence’) in which the properties inhere. That makes sense, doesn’t it? So, according to this theory, we should look at ourselves as just being a bundle of things. There’s no real self. There’s no soul. So we die and that’s it really. Nothing left.’

At this point, one of the philosophers in the room was thinking this was a rather odd introduction to the philosophy of mind – and surely one that was not to the point – but he decided not to intervene. Tom looked at the audience, but everyone seemed to be listening rather respectfully, and so he decided to just ramble on, pointing at a few statistics next to the keywords to underscore that what he was talking about was actually relevant.

‘Now, we also have the theory of re-birth in Buddhism, and that’s where I think Buddhist philosophy is very contradictory. How can one reconcile the doctrine of re-birth with the anatta doctrine? I have read a number of Buddhist authors, but I feel they all engage in meaningless or contradictory metaphysical statements when you scrutinize this topic. In the end, I feel it’s very hard to avoid the conclusion that the Buddhist doctrine of re-birth is nothing but a remnant of Buddhism’s roots in Hindu religion, and that, if one wants to accept Buddhism as a philosophy, one should do away with its purely religious elements. That does not mean the discussion is not relevant. On the contrary: we’re talking about the relationship between religion and philosophy here. That’s the third topic I would advance as part of the scope of our project.’

As the third slide came up, carrying the title ‘Philosophy of Religion and Morality’, the philosopher decided to intervene after all.

‘I am sorry to say, mister, but you haven’t actually said anything about the philosophy of mind so far, and I would object to your title, which amalgamates things: the philosophy of religion and morality may be related, but they are surely not one and the same. Is there any method or consistency in what you are presenting?’

Tom nodded: ‘I know. You’re right. As for the philosophy of mind, I assume all the people in this room are very intelligent and know a lot more about the philosophy of mind than I do, and so that’s why I am not saying all that much about it. I preferred a more intuitive approach. I mean, most of us here are experts in artificial intelligence. Do I really need to talk about the philosophy of mind? Jon, what do you think?’

Tom was obviously trying to co-opt him. Jon laughed as he recognized the game Tom was playing.

‘You’re right, Tom. I have no objections. I agree with our distinguished colleague here that you did not really say anything about the philosophy of mind, but that’s probably not necessary indeed. I do agree that the kind of stuff you are talking about is stuff I would be interested in, and so I must assume that the people for whom we’re going to try to re-build M – so it can talk about such things – will be interested too. I see the statistics. These are relevant. Very relevant. I am starting to see what you’re getting at. Do go on. I want to hear that religious stuff.’

‘Well… I’ll continue with this concept of the soul and the idea of re-birth for now. I think there is more to it than just Buddhism’s Hindu roots. I think it’s hard to deny that all doctrines of re-birth or reincarnation – whether they be Christian (or Jewish or Muslim), Buddhist, Hindu, or whatever – obviously also serve a moral purpose, just like the concepts of heaven and hell in Christianity do (or did), or like the concept of a Judgment Day in all Abrahamic religions, be they Christian (Orthodox, Catholic or Protestant), Islamic or Judaic. According to some of what I’ve read, it’s hard to see how one could firmly ‘ground’ moral theory and avoid hedonism without such a doctrine. However, I don’t think we need this ladder: in my view, moral theory does not need reincarnation theories or divine last judgments. And that’s where ethics comes in. I agree with our distinguished professor here that the philosophy of religion and ethics are two very different things, so we’ve got like four proposed topics here.’

At this point, he thought it would be wise to stop and invite comments and questions. To his surprise, he had managed to convince cynical Jon, who responded first.

‘Frankly, Tom, when I read your papers on this, I did not think it would go anywhere. I did not see the conceptual framework, and that’s essential for building it all up. We need consistency in the language. Now I do see consistency. The questions and topics you raise are all related in some way and, most importantly, I feel you’re using a conceptual and analytic framework which we can incorporate into some kind of formal logic. I mean… Contemporary analytic philosophy deals with much of what you have mentioned: analytic metaphysics, analytic philosophy of religion, philosophy of mind and cognitive science,… I mean… Analytic philosophy today is more like a style of doing philosophy – not really a program or a set of substantive views. It’s going to be fun. The graphs and statistics you’ve got on your slides clearly show the web-search relevance. But are we going to have the resources for this? I mean, creating M was a 100-million-dollar effort, and what we have done so far are minor adaptations really. You know we need critical mass for things like this. What do you think, Paul?’

Paul thought for a while before he answered. He knew his answer would have an impact on the credibility of the project.

‘It’s true we’ve got peanuts as resources for this project, but so we know that, and that’s it really. I’ve also told the Board that, even if we fail to develop a good product, we should do it, if only to further test M and see what we can really do with it. I mean…’

He paused and looked at Tom, and then back at the others around the table. What he had said so far obviously did not signal a lot of moral support.

‘You know… Tom and I are very different people. Frankly, I don’t know where this is going to lead. Nothing much, probably. But it’s going to be fun indeed. Tom has been talking about artificial consciousness from the day we met. All of you know I don’t think that concept really adds anything to the discussion, if only because I never got a really good definition of what it entails. I also know most of you think exactly the same. That being said, I think it’s great we’ve got the chance to take a stab at it. It’s creative, and so we’re getting time and money for this. Not an awful lot, but then I’d say: just don’t join if you don’t feel like it. But now I really want the others to speak. I feel like Tom, Jon and I have been dominating this discussion, and we still have no real input as yet. I mean, we’ve got to get this thing going here. We’re going to do this project. What we’re discussing here is how.’

One of the other developers (a rather quiet guy whom Tom didn’t know all that well) raised his hand and spoke up: ‘I agree with Tom and Paul and Jon: it’s not all that different. We’ve built M to think, and it works. Its thinking is conditioned by the source material, the rule base, the specifics of the inference engine and, most important of all, the objective function, which steers the conversation. In essence, we’re not going to have much of an objective function anymore, except for the usual things: M will need to determine when the conversation drifts into a direction or subject of which it has little or no knowledge, or when its tone becomes unusual, and then it will have to steer the conversation back onto more familiar ground – which is difficult in this case, because all of it is unfamiliar to us too. I mean, I could understand the psychologists on the team when we developed M. I hope our philosophy colleagues here will be as useful as the psychologists and doctors were. How do we go about it? I mean, I guess we need to know more about these things as well?’
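For the technically-minded reader: the ‘steering’ the developer describes could be as simple as scoring how familiar the current topic is, and redirecting when the score drops too low. The story never spells out M’s internals, so the sketch below is purely illustrative – the topic table, scores and threshold are all my own invention:

```python
# A minimal sketch (all names hypothetical) of the 'steering' term described
# above: score how familiar the current topic is, and redirect when the
# conversation drifts into territory the system knows little about.

FAMILIARITY_THRESHOLD = 0.4

# Toy knowledge base: topic -> how well the system is assumed to cover it (0..1).
KNOWLEDGE_COVERAGE = {
    "behavioral therapy": 0.9,
    "mindfulness": 0.8,
    "nietzsche": 0.2,
    "reincarnation": 0.1,
}

def familiarity(topic: str) -> float:
    """Return the assumed coverage score for a topic, 0.0 if unknown."""
    return KNOWLEDGE_COVERAGE.get(topic.lower(), 0.0)

def steer(topic: str) -> str:
    """Pick a reply strategy: engage if familiar enough, redirect otherwise."""
    if familiarity(topic) >= FAMILIARITY_THRESHOLD:
        return f"ENGAGE: continue the conversation on '{topic}'"
    # Redirect toward the best-covered topic in the knowledge base.
    fallback = max(KNOWLEDGE_COVERAGE, key=KNOWLEDGE_COVERAGE.get)
    return f"REDIRECT: steer from '{topic}' back toward '{fallback}'"

if __name__ == "__main__":
    for t in ["mindfulness", "reincarnation"]:
        print(steer(t))
```

In the story’s world, of course, the familiarity score would come out of the inference engine and the rule base rather than a hand-made table – but the shape of the mechanism would be the same.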

While, on paper, Tom was the project leader, it was Paul who responded. Tom liked that, as it demonstrated commitment.

‘Well… The first thing is to make sure the philosophers understand you – the artificial intelligence community here on this project – because only then can we make sure you will understand them. There needs to be a language rapprochement from both sides. I’ll work on that and get it organized. I would suggest we consider this a kick-off meeting only, and that we postpone the organization of the work-planning to a more informed meeting a week or two from now. In the meanwhile, Tom and I – with the help of all of you – will work on a preliminary list of resource materials and mail it around. It will be mandatory reading before the next meeting. Can we agree on that?’

The philosophers obviously felt they had not talked enough – if at all – and, hence, felt obliged to bore everyone else with further questions and comments. However, an hour or so later, Tom and Paul had their project and, two hours later, they were running in Central Park again.

‘So you’ve got your Pure Mind project now. That’s quite an achievement, Tom.’

‘I would not have had it without you, Paul. You stuck your neck out – for a guy who basically does not have the right profile for a project like this. I mean… Your reputation is on the line too, and so… Thanks really. Today’s meeting went well because of you.’

Paul laughed: ‘I think I’ve warned everyone enough that it is bound to fail.’

‘I know you’ll make it happen. Promise is a guru already. We are just turning her into a philosopher now. In fact, I think it is the other way around. She was a philosopher already – even if her world view was fairly narrow so far. And so I think we’re turning her into a guru now.’

‘What’s a guru for you?’

‘A guru is a general word for a teacher – or a counselor. Pretty much what she was doing – a therapist let’s say. That’s what she is now. But true gurus are also spiritual leaders. That’s where philosophy and religion come in, isn’t it?’

‘So Promise will become a spiritual leader?’

‘Let’s see if we can make her one.’

‘You’re nuts, Tom. But I like your passion. You’re surely a leader. Perhaps you can be M’s guru. She’ll need one if she is to become one.’

‘Don’t be so flattering. I wish I knew what you know. You know everything. You’ve read all the books, and you continue to explore. You’re writing new books. If I am a guru, you must be God.’

Paul laughed. But he had to admit he enjoyed the compliment.

Chapter 13: M grows – and invades

Paul was right. It was not just a matter of clearing and releasing M for commercial use and then letting it pervade all of society. Things went much more gradually. But the direction was clear, and the pace was steady.

It took a while before the Federal Trade Commission and the Department of Justice understood the stakes – if they ever did – and then it took even more time to structure the final business deal. But then M did go public, and its stock market launch was a huge success. The companies that had been part of the original deal benefited the most from it. In fact, two rather obscure companies, which had registered the Intelligent Home and Intelligent Office trademarks at a very early stage of the Digital Age, got an enormous return on investment while, in a rather ironic twist, Tom got no benefit whatsoever from the fact that, in the end, the Board of the Institute decided to use his favorite name for the system – Promise – as the name for the whole business concern. That didn’t deter Tom from buying some of Promise’s new stock.

The company started off by offering five major product lines: Real Talk™, Intelligent Home™, Intelligent Office™, Mindful Mind™, and Smart Interface™. As usual, the individual investors – like Tom – did not get the expected return on investment, at least not in the initial years of M’s invasion of society. But then M did not disappoint either: while the market for M grew well below the anticipated 80% per annum in the initial years after the IPO, it did average 50%, and it edged closer and closer to the initial expectations as time went by.

Real Talk™ initially generated most of the revenue. Real Talk™ was the brand name which had been chosen for M’s speech-to-text and text-to-speech capabilities – speech recognition and speech synthesis. These were truly revolutionary, as M mastered context-sensitivity, and all computational limitations had been eliminated through cloud computing (one didn’t buy the capability: one rented it). Real Talk™ quickly eliminated the very last vestiges of stenography and – thanks to an app through which one could use Real Talk™ on a fee-for-service basis – destroyed the market for dictation machines in no time. While this hurt individual shareholders, the institutional investors had made sure to make their pile before or, even better, on the occasion of Promise’s IPO. If there was one thing Tom learned from the rapid succession of new product launches and the whole IPO business, it was that individual investors always lose out.
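As an aside: the fee-for-service model – renting the capability rather than buying it – is easy enough to picture. A toy sketch, in which the rate, the names and the stubbed-out remote call are all hypothetical (the story gives no such details):

```python
# A toy illustration (every name here is invented) of the 'rent, don't buy'
# model: the client holds no speech engine locally; it meters usage and is
# billed per second of audio sent to a hypothetical cloud service.

from dataclasses import dataclass

RATE_PER_SECOND = 0.002  # hypothetical fee, in dollars

@dataclass
class TranscriptionJob:
    audio_seconds: float
    text: str

def transcribe(audio_seconds: float) -> TranscriptionJob:
    """Stand-in for a remote speech-to-text call; a real client would
    stream audio to the service and receive the transcript back."""
    return TranscriptionJob(audio_seconds, "<transcript placeholder>")

def monthly_bill(jobs: list[TranscriptionJob]) -> float:
    """Fee-for-service billing: pay only for the seconds actually used."""
    return sum(j.audio_seconds for j in jobs) * RATE_PER_SECOND

if __name__ == "__main__":
    jobs = [transcribe(90.0), transcribe(45.5)]
    print(f"bill: ${monthly_bill(jobs):.2f}")  # bill: $0.27
```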

Intelligent Home™ picked up later – much later. But when it did, it also went through the roof. Intelligent Home™ was M at home: it took care of all of your home automation as well as of your domestic robots – if you had any, which was not very likely. But then M did manage to boost their use tremendously and, as a result, the market for domotics got a big boost (if only because the introduction of M finally led to a harmonization of the communication protocols of all the applications that had been around).

Intelligent Office™ was M at the office: it chased all employees – especially those serving on the customer front line. With M, there was really no excuse for being late in claiming expenses or planning holidays, or for not reaching your sales target. Moreover, if being late with your reports was not an option anymore, presenting flawed excuses wasn’t either. But, if one really got into trouble, one could always turn to Mindful Mind™.

Mindful Mind™ could have gone down in history as one of the worst product names ever, but it actually went on to become Promise’s best-selling suite. It provided cheap online therapy to employees, retirees, the handicapped, the mentally disabled, drug addicts and alcoholics, delinquents and prisoners, social misfits, the poor, and what have you. You name it: whatever deviated from the normal, Mindful Mind™ could help you fix it. As it built on M’s work with its core clientele – US Army veterans – its success did not come unexpected. Still, its versatility surprised even those who were somewhat in the know: even Paul had to admit it all went way beyond his initial expectations.

Last but not least, there was Smart Interface™. Smart Interface™ grouped all of Promise’s customer-specific development business. It was the Lab turned into a product-cum-service development unit. As expected, customized sales applications – M selling all kinds of stuff online, basically – were the biggest hit, but government and defense applications were a close second.

Tom watched it all with mixed feelings. From an aficionado working as a volunteer for the Institute, he had grown into a job as business strategist and was now serving Promise’s Board of Directors. He sometimes felt like he had been co-opted by a system he didn’t necessarily like – but he could imagine some of his co-workers thought the same, although they wouldn’t admit it publicly either. A market survey revealed that, despite its popularity, the Intelligent Home™ suite was viewed with a lot of suspicion: very few people wanted the potentially omnipresent system to watch everything that was said or done at home. People simply switched it off when they came home in the evening, presumably out of privacy concerns. This, in turn, prevented the system from being very effective in assisting with parenting and all those other noble tasks which Tom had envisaged for M. Indeed, because of DARPA’s involvement and the general background of the system, the general public did link M to the Edward Snowden affair and to mass surveillance efforts such as PRISM. And they were right. The truth was that one could never really switch it off: M continued to monitor your Internet traffic even when you had switched off all of the Intelligent Home™ functionality. When you signed up for it, you did indeed sign up for a 24/7 subscription.

It was rather ironic that, in terms of privacy, the expansion of M did not actually change all that much – or, at least, much less than people thought. While M brought mass surveillance to a new level, it was somewhat less revolutionary than one would think at first sight. In fact, the kind of surveillance which could be – and was being – organized through M had been going on for quite a while already. All those companies which de facto operate the Internet – such as Microsoft, Google, Yahoo!, Paltalk, YouTube, AOL, Skype and even Apple – had given the NSA access not only to their records but also to their online activities long before the Institute’s new program had started. Indeed, the introduction of the Protect America Act in 2007 and of the Foreign Intelligence Surveillance Act Amendments Act in 2008, both under the Bush administration, had basically brought the US on par with China when it comes to creating the legal conditions for Big Brother activities, and the two successive Obama administrations had not done anything to reverse the tide. On the contrary: the public outcry over the Snowden affair came remarkably late in the game – way too late, obviously.

When it comes to power and control, empires resemble each other. Eisenhower had been right to worry about the striking resemblance between the US and the USSR in terms of their approach to longer-term industrial planning and to gaining strategic advantage through a steadily growing military-industrial complex – and to warn against it in his farewell speech to the nation. That was some sixty years ago now. When Tom re-read that speech, he thought Eisenhower’s words still rang true. Back then, Eisenhower had claimed that only ‘an alert and knowledgeable citizenry’ would be able to ‘compel the proper meshing of the huge industrial and military machinery of defense with our peaceful methods and goals so that security and liberty may prosper together.’

Tom was not all that sure that the US citizenry was sufficiently knowledgeable and, if it was, that it was sufficiently alert. It made him ponder the old dilemma: what if voters decide to roll back democracy, like the Germans did in the 1930s when they voted for Hitler and his Nazi party? Such thoughts and comparisons were obviously outrageous but, still, the way these things were being regulated resembled a ratchet, and one should not blame the right only: while Republican administrations had always been more eager to grant government agencies ever more intrusive investigative powers, one had to acknowledge that the Obama administration had not been able to roll anything back, and that it had actually made some moves in the same direction – albeit somewhat less radical and, perhaps, somewhat more discreet. Empires resemble each other, except that the model (the enemy?) – ever since the Cold War had ended – now seemed to be China. In fact, Tom couldn’t help thinking that – in some kind of weird case of mass psychological projection – the US administration was actually attributing to China’s polity and administration motivations which it could not fully accept as its own.

Indeed, M had hugely increased the power of the usual watchdogs. M combined the incredible data-mining powers of programs like PRISM with a vast reservoir of intelligent routines which permitted it to detect any anomaly (defined, once again, as a significant deviation from the mean) in real time. Any entity – individuals and organizations alike – with some kind of online identity had been or was being profiled in some way. The key difficulty was finding the real-life entity behind the profile but – thanks to all of the more restrictive Internet regulation – this problem was being tackled at warp speed as well. But so why was it OK for the US to do this, and not for China? When Tom asked his colleagues, in as couched a language as he could muster, and in as informal a setting as he could stage, the answer amounted to the usual excuse: the end justifies the means – some of these things may indeed not look morally right, but they are, by virtue of the morality of the outcome. But what was the outcome? What were the interests of the US here really? At first thought, mass surveillance and democracy do not seem to rhyme with each other, do they?
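For what it’s worth, the notion of an anomaly used here – a significant deviation from the mean – can be made precise with nothing more than a running mean and standard deviation. A minimal sketch, assuming a simple z-score test over a stream of observations (the threshold and the example data are mine, not the story’s):

```python
# A minimal sketch (purely illustrative) of the anomaly notion the text uses:
# flag any observation that deviates 'significantly' from the running mean.

import math

def detect_anomalies(stream, threshold=3.0):
    """Yield (index, value) for points more than `threshold` standard
    deviations away from the running mean of everything seen so far."""
    n, mean, m2 = 0, 0.0, 0.0  # Welford's online mean/variance accumulators
    for i, x in enumerate(stream):
        if n > 1:
            std = math.sqrt(m2 / (n - 1))
            if std > 0 and abs(x - mean) / std > threshold:
                yield i, x
        # Update the running statistics with the new observation.
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)

if __name__ == "__main__":
    traffic = [100, 102, 98, 101, 99, 100, 97, 103, 500, 101]
    print(list(detect_anomalies(traffic)))  # flags the spike: [(8, 500)]
```

The point of the single-pass (online) form is precisely the ‘real-time’ bit: the detector never needs to store or re-scan the history, which is what would make such profiling feasible at Internet scale.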

While privately critical, Tom was intelligent enough to understand that it did not really matter. Technology usually moves ahead at its own pace, regardless of such philosophical or societal concerns, and new breakthrough technologies, once available, do pervade all of society. It was just a new world order – the Digital Age indeed – and so one had better come to terms with it in one way or another. And, of course, when everything is said and done, one would rather live in the US than in China, wouldn’t one?

When Tom thought about these things, M’s Beautiful Mind appeared to him as somewhat less beautiful. His initial distrust had paid off: he didn’t think he had revealed anything particularly disturbing, despite the orange attitude indicators. He found it ironic that he had actually climbed up quite a bit on this new career ladder: from patient to business strategist. Phew! However, despite all this, he still felt a bit like an outsider. But then he told himself he had always felt like this – and that he had better come to terms with that too.