There are actual living things we have difficulty proving are "conscious", and you get into really tricky territory trying to establish what might be "conscious" (or even "alive") in the world, even without bringing AI into the mix. Even the people around us we can't prove to be conscious, except in that we are also human and assume they have a similar first-person (subjective) view of the world and aren't just biological robots running equations. Yes, you could literally be the only conscious being in the universe, and the universe would be indistinguishable from one with many consciousnesses.
People dismiss it often, and it has valid academic criticisms, but most "lay" critiques I've seen seem to dismiss it because it gives things they don't consider to be "conscious like them" consciousness.
I think in general we exaggerate qualities that humans have as uniquely special and important (intelligence, sentience, consciousness), even if the definition of them is fuzzy and any non-fuzzy definition is "too inclusive". I wonder if this is because we are collectively identity-building and creating an "other" out of nature, because otherwise we'd
1. Be much less special than we want to feel
2. Have to consider a lot of inconvenient externalities for our reasoning (moral and otherwise) to be consistent
And both are slowdowns when you are bootstrapping a post-scarcity society out of scarcity, so it'd be culturally valuable to reify special qualities that we simply determine "we" possess and "they" don't, because that's easier to unify around and permits more actions than making sure everyone can cope with the unfiltered reality of human... unremarkableness in an uncaring world: another social species like so many others, with a temporary oligarchy on the planet. (Without wanting to drone on too philosophically, I do think this aspect of Weltschmerz is underappreciated, especially seeing the anxiety amongst my peer group when the topic comes up.)
> People dismiss it often, and it has valid academic criticisms, but most "lay" critiques I've seen seem to dismiss it because it gives things they don't consider to be "conscious like them" consciousness.
If your attempt at a formalism provides useful, actionable implications, and only really disagrees with people's intuitions on cases that are generally believed to be edge cases, or maybe that you also have good reason to believe are edge cases on different ground, then you might have a decent definition.
If your attempt at a formalism provides useful implications, but disagrees with everyone's intuitions on what the words you're using mean, you've probably defined the wrong thing. That's not to say that you aren't defining something sensible (you might still be), but strongly suggests it's not what you want it to be.
Selling IIT as a description of consciousness is roughly the same flavor of hubris as defining berries to include oranges and cucumbers and exclude raspberries and strawberries.
> Selling IIT as a description of consciousness is roughly the same flavor of hubris as defining berries to include oranges and cucumbers and exclude raspberries and strawberries.
Not trying to defend IIT, but the only reason your example works here is because from our human perspective (eating these plants) we find it absurd to include cucumbers and exclude raspberries.
So, talking about hubris: wouldn't plants (if they could) very likely have a totally different perspective? To them it would probably seem totally absurd that we use the taste or shape of the fruit as a grouping feature, and not some other feature that plants can experience and that might be far more defining.
Of course it is intuitively understandable to look at ourselves and animals we understand when defining consciousness. Even like that it is hard to define. Trying to incorporate things that show conscious-like behaviors, but in totally human-unrelatable ways is not a bad next step, but it is hard to know where to draw the border. A rock could have conscious processes running in it on geological timescales — how would one falsify that?
You have acknowledged that there are valid academic criticisms of IIT, and for those wanting to get an idea of what those might be, a good place to start is with Scott Aaronson's responses in his blog: https://scottaaronson.blog/?p=1799
Note that the issue that you are concerned with here and the usefulness of IIT are separate concerns, and critics like Aaronson are not taking that position on the basis of the attitudes you claim are behind many 'lay' critiques.
I appreciate Aaronson's critique; he himself makes my main point of disagreement, i.e. that constructing a physical system with the properties he describes (a modified Vandermonde matrix) might yield nontrivial surprises.
But even accepting that IIT is wrong in the "consciousness might imply a high IIT score, but not vice versa" sense, I'm simply not aware of any definition which is as *attackable* because of its formal specification.
This is not me going "well, you don't have anything better", but me asking to be pointed to similarly strictly defined notions of consciousness.
I found the conversation between you two interesting (as well as Scott Aaronson's post), so I figured I could contribute with a potentially relevant piece of reading: "Hard Criteria for Empirical Theories of Consciousness" (24-page PDF) at https://digitalcommons.chapman.edu/cgi/viewcontent.cgi?artic...
In the same ballpark as IIT, I've seen many mentions of GWT/GNT (global neuronal workspace theory). But the PDF covers that and many others. I don't have any particular "conclusion" myself, but it's just a fascinating discussion to be swimming in even when utterly clueless.
I am not aware of anything similarly strictly defined, but that is not, of course, evidence for the correctness or even usefulness of IIT, and nor does it somehow counter the specific objections that Aaronson raises.
IIT is a nice intuition pump, but it's missing essential parts. Consciousness should be related to survival, replication of information and its own energy costs. It's not just some integrated information for no purpose at all. The purpose is survival if it is conscious.
If you add that to IIT, then you suddenly get a new filter for what could be conscious - something that competes for information replication, not just something with high IIT score.
Is that not just your intuition too? Is it not possible to imagine an artificial consciousness that does not care one way or the other for its own survival?
No, because survival is closely tied to evolution. By mere survival an agent creates its own structure and values. It can't evolve if it doesn't risk its skin in the game. The price of our present-day intelligence has been very high.
But you can piggyback AI training on human culture and embody the agent in our environment so it can still function like a conscious biological agent. The AI is just passing the buck to humans, without us it wouldn't even have a goal.
On the other hand all living things have goals without human help. Maybe when AIs can self replicate their information into the future like us, without external help, they could be eventually conscious. We don't know yet how to make physical self replicators that evolve. Only nature solved this riddle so far.
BTW, AlphaGo was subjected to evolutionary techniques, and it shows. The agents had to duel with clones and past versions of themselves. Survival was conditioned on win rate. Suddenly AI can surpass humans in this little environment, but that came at the price of discarding so many unfit earlier versions.
That's probably because the rest of the researchers avoid it like the plague. If everybody had to comment on it for some magical reason, it would probably be burned to the ground and then some.
IIT doesn't capture much or any of what we consider important about consciousness. If it's the most coherent definition we have, we don't have a coherent definition.
> Yes, you could literally be the only conscious being in the universe and the universe would indistinguishable from one with many consciousnesses.
No, you think that's possible, but it's naive. Your existence depends on your environment and the self-replication of information. Your structure depends on it; so does your cognition. Are you saying you could be separate from that tree, with everything around you just a fake?
I’m saying there’s nothing you can measure to say anything — even another human — has the same sort of subjective experience (consciousness, a soul, whatever) that you’re experiencing right now, as you read this comment from inside your own head. You can make a guess, but it’s impossible to truly know. (This is a bit different from solipsism, if that’s what’s tripping you up. Although that, too, is a bit of a problem…) So any attempt to “prove” an AI is conscious is either going to fall into the hole of “well, how do you prove anything is conscious?” or is going to need a heavily distorted definition of “conscious” (like “seems kinda human” or “makes uncanny decisions we can’t fully explain”) to be true.
I think we can say that AI is conscious if we can observe its stack-trace showing it creating symbolic representations of its own processing and then processing those and using the same language for that representation as it uses for communications with humans.
In other words if we can observe a machine thinking about its own (symbolic) thinking, then I think it's conscious.
Consciousness is the ability to observe one's own mental processes and make sense of them symbolically. That is one possible definition of it.
You are right that we can't really prove that humans are conscious, because we cannot observe their thinking.
But, to prove that a machine is conscious all we need is to agree on a definition of what it means to be conscious, because machines can be easily inspected. And then find a machine that demonstrates features that fit the definition.
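The proposed criterion can be made concrete with a toy sketch (all names here are hypothetical, and the "processing" is deliberately trivial): a system that emits a symbolic description of each of its own processing steps, and then runs that description back through the very same machinery it uses for ordinary input.

```python
# Toy illustration of the "machine thinking about its own (symbolic)
# thinking" criterion. Not a claim about real AI systems.
class SelfDescribingMachine:
    def __init__(self):
        self.trace = []  # symbolic record of the machine's own processing

    def process(self, text: str) -> str:
        result = text[::-1]  # the "ordinary" computation: reverse the input
        # record a symbolic description of what was just done
        self.trace.append(f"reversed input '{text}' to '{result}'")
        return result

    def reflect(self) -> str:
        # feed the machine's own symbolic trace back through process(),
        # i.e. apply the same machinery to a description of itself
        return self.process("; ".join(self.trace))

m = SelfDescribingMachine()
m.process("hello")
print(m.reflect())  # the machine operates on a description of its own steps
```

Whether such a loop would count as "thinking about its own thinking" in any interesting sense is, of course, exactly the point under dispute; the sketch only shows the mechanism is cheap to build.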
If we can't agree on what it means for something to be conscious then the whole question of whether some AI is or could be conscious is of course meaningless.
Consciousness is such a loaded word, and means something different depending on who you are talking to. For instance, surgeons and anesthesiologists have a very peculiar definition of what it means to be conscious (i.e. conscious sedation); regular people might not agree it meets the minimum criterion of being awake and aware of yourself. I think a more important goal, if we are designing artificial life, might be whether or not an AI is conscientious (i.e. morally).
> I think a more important goal, if we are designing artificial life, might be whether or not an AI is conscientious (i.e. morally).
I have the feeling that attempts to create conscientious AI will just result in paperclip maximizers that have to use rationalization as an indirection layer to engage in their paperclip maximizing behavior.
Not much of an improvement, but realistically, what other result would be tolerated by whoever is building a DWIM AI to run a paperclip factory and for whom conscientiousness would just be a regulatory box to tick?
I'd be very skeptical of claims that, say, some complex ongoing control system, like a self-driving car, could be "conscious". But there's an argument someone could make.
But an artifact that has no data storage seems to fail any reasonable definition immediately. Maybe it could be part of something else that you can claim is conscious if you add storage, output controls, aims or whatever. But by itself the claim just seems preposterous.
Of course it can. Your laptop “knows” all sorts of stuff about itself and has all sorts of self-reflective processes that trigger without your specific input. Mine gives me warnings when my hard drive is almost full without me having to ask. This does not make it conscious.
But the GP is talking about specific properties of large language models like GPT-3. A laptop is indeed a self-maintaining, self-monitoring system, at a low level. A large language model isn't that: it is just a map from input to output, and it doesn't keep track of its previous output or input.
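The distinction being drawn can be sketched in a few lines (a toy illustration, not a model of any real LLM; both classes here are made up for the example): a pure input-to-output map gives the same answer every time, while a system that keeps state can answer the same input differently as its history grows.

```python
# A pure map: output depends only on the current input, so
# repeated calls with the same input are indistinguishable.
def stateless_model(prompt: str) -> str:
    return prompt.upper()

# A stateful system carries its interaction history forward,
# so identical inputs can yield different outputs over time.
class StatefulSystem:
    def __init__(self):
        self.history = []

    def respond(self, prompt: str) -> str:
        self.history.append(prompt)
        return f"{prompt.upper()} (turn {len(self.history)})"

assert stateless_model("hi") == stateless_model("hi")  # always identical

s = StatefulSystem()
print(s.respond("hi"))  # prints "HI (turn 1)"
print(s.respond("hi"))  # prints "HI (turn 2)" -- state accumulated
```

(In practice, chat interfaces fake statefulness by re-feeding the conversation so far as part of the input, which is still a pure map over a longer input.)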
You’re wading into epistemology, a topic that’s been discussed and debated for millennia. Which maybe relates to my larger point that the concept of “consciousness” quickly gets out of hand to the point where it seems to me a little silly to say neural nets are “slightly conscious.”
(in an ironic demonstration of what I mean) I'm not sure I understand what you're saying - but is it that you can only call a process 'thinking' if it's producing some output that's not derivable from its input, nor hard-coded in the definition of the function, as it were?
Perhaps that's way off, I was just starting to think along those lines as I came to your comment, and it seemed it might fit, that we might be thinking along the same lines.
Just as dried-up tardigrades are not alive (until reintroduced into an environment in which they can survive), an artificial intelligence which runs decoupled from any notion of time, and whose weights are frozen or changed without it being able to influence or understand the outcomes, will not be able to derive a theory of mind about itself. Even if it did, since that contributes little to its own objective, regularization would remove these features. As ANNs are trained today, most architectures are dried-up tardigrades which get watered, then dried and reset to the previous configuration, during inference.
Right? Consciousness involves agency, self reflection, motivations, a sense of urgency. And a good deal of "theory of mind" and self denial.
I'll give it to cats and dogs, birds, the apes, and many other critters, but not AI. Those things are separate developmental tracks, and not included in AI research, from what I can see. They aren't going to spontaneously arise, and it seems to me that they all came before we achieved sentience at all.
Wouldn't that mean all humans cannot reflect on themselves? Even our heartbeat is a feed of data. To be especially smartassed, the environmental conditions of the brain are essentially an indirect form of data, as temperature alone affects reaction rates. In the absence of data, trying any random value, a reproduction would fail in the same way that trying to make an ice sculpture with water at 25°C would.
That's the problem. The neural net can't reflect on itself because it doesn't have a body and an environment to act in. It should be able to experience consequences of previous actions to close the loop on itself.
I highly encourage anyone who is interested in the subject to listen through John Searle's lectures on the philosophy of mind [1]. He taught a course at UC Berkeley on the subject for many years, and one of the years was recorded and uploaded to Youtube.
He walks through the history of the subject from Descartes onward and spends a lot of time on how philosophers got very excited during the 1960s and 70s, as computer science was just taking off, because it seemed that at long last we had a new way to explain consciousness. The idea is that the brain is just a kind of computer and consciousness is just a certain kind of computation. If we wrote the right program and ran it on a sufficiently powerful computer, we could make other kinds of computers conscious as well.
But Searle is most famous for the Chinese Room thought experiment which is (in my opinion) a fairly decisive counterargument against this position. That, along with the What Mary Knew thought experiment and Nagel's What Is It Like to Be a Bat together made the "consciousness is computation" position increasingly untenable.
It took about 40 years, but by the time these lectures were recorded it seems the tide had really turned among philosophers of mind (Searle has his own biases, but he tries to be impartial in explaining all sides). At least among philosophers who specialize in the subject, it's hard to find many who seriously believe that a GPU running a ResNet is conscious.
Obviously there's a whole semester's worth of material in those lectures and the arguments can't really be summed up in an HN comment, but anyone who is interested in thinking about these questions should really give the whole course a listen.
In some more "rational"-oriented circles, the Chinese Room argument is seen as obviously wrong, outdated, and mystical; for example, see Dan Dennett.
The Chinese Room would need to be so enormous, and it would work so slowly, that it's not a good intuitive analogy. It also introduces a homunculus actor that reads, interprets and understands the instructions, which muddles the analogy and focuses our attention to the conscious quality of the person following the instructions.
The notion that the Chinese Room is 'mystical' is one of the most bewildering responses to it, to the point where I don't think people who make that argument actually understood Searle.
Searle, simplified, holds that consciousness is a property of very particular biochemical systems, not a disembodied thing that exists independent of the matter it inhabits.
It's the rational-oriented circles, ironically enough, who almost dabble in Cartesian dualism by positing "computation" or "minds" as distinct from matter, in fact on occasion with almost religious-sounding implications.
Come on, really? You think there's a significant subset of rationalists who are quasi-Dualists? Not just people who are bog-standard materialists who are just being sloppy with language when talking about emergent conscious systems because they don't care about or actively disdain most philosophy?
On the other hand, Searle confidently states that a perfect quantum mechanical simulation of a human brain would not be conscious. This is because it somehow lacks the intentional states of the brain it's perfectly simulating. He doesn't feel the need to justify that a human mind has these states, because our daily direct experience of consciousness cannot seriously be in question. The daily direct experience of consciousness that a perfect simulation of a human brain would assert that it feels is, somehow, different and we can't trust it.
At every step in his argument there's caveats, misleading allegory, and dismissals that are so much of a stretch it seems pretty obvious he's retroactively arguing from a position that our biological conscious minds are somehow special and mystic.
I agree that Searle is often too strong in his claims, but here is what I think people need to take away from the Chinese Room and why it is useful.
That solving a problem, or an array of problems that a conscious mind can solve may be perfectly able to be solved by a non-conscious machine. That the consciousness part may not be relevant in the problem-solving process, and that consciousness may be unique to living beings who have distinct physiology, anatomy, ways to process information, that computers as they exist now do not have. That a simulation is not necessarily the same thing as the thing it simulates, and does not have all of its properties.
These are good questions, none of which are mystical, and all of which follow from Searle's argument. And I think you're really underestimating to what degree people are lured in by the 'magic' of neural nets, because otherwise we wouldn't be in this thread. A neural net that does some function approximation is not 'slightly conscious' unless we're going all in on panpsychism. Even from a functionalist perspective, the neural net performs nothing that is reasonably involved in what we consider to be responsible for consciousness. Nobody ever claimed a TI-83 is 'slightly conscious', or that Stockfish is, merely because it plays exceptional chess.
The Chinese Room argument might have a number of problems, but it can be simplified. Put someone in a room who cannot see colours (an achromatope), and give them three different colour filters in the form of plastic sheets. A person outside the room passes sheets of paper of known colour into the room, the achromatope puts them under the colour filters, thereby determining each paper's colour, which he tells the person outside the room. The person outside the room thinks there's someone in the room who can see colours. But achromatopes can't see any colours, and will readily tell you that they have no idea what it's like to see colours.
If you think it's very different, can you tell me when, in people or animals that we'd agree are conscious, that there's consciousness but no experience of qualia, or experience of qualia without consciousness?
Experience of a stream of qualia is how I'd define consciousness. This is also consistent with the medical definition (leaving aside the other minds problem). If you have a different working definition, how does it differ from mine?
These are all vague terms that ultimately don't seem to map to anything grounded in reality.
But consciousness has something to do with seeing oneself as an agent in control, not simply executing an automatic script but being in control and knowing that one is in control, seeing the self also from the outside in a way and freely moving between these levels. I don't know if consciousness is possible without qualia. And anyone who claims to know probably also knows how many angels can dance on the head of a pin.
Yeah, there are definitely some holdouts for the functionalist position like Dennett. Searle goes through Dennett's objections in detail in the lectures.
Lecture 6 starting at about 22:00 is the Chinese room argument. https://youtu.be/zLQjbACTaZM?t=1322 Systems reply starts at 34:00. To be honest, I haven't found the detailed treatment of the counterarguments that you promised in another comment. His counterargument to the systems reply is that he's flabbergasted, and "it's not gonna work if you think about it". Then, of course, he gives the "what if I memorize the rulebook and do it all in my head" spiel. If you're committed to the systems reply, the man and the stuff in his head etc. is still a system and the systems reply holds just as much as before. It's fun indeed to hear it from the man himself, but from his tone it's abundantly clear that he is just not willing, or perhaps able, to give it serious consideration.
My completely naive reply to the Chinese room is that as stated doesn't learn from the inputs and outputs.
Let's modify it slightly to include a second stream of responses to the room's answers. Then make a guidebook that adds new rules based on the growing history of interaction.
Now the rulebook doesn't just compute a 1-1 map of input to output: each interaction updates not only its experience, but all present and future answers. An input to the system invokes memory, taps into experience, evaluates nuance.
Given lots of time and interaction and a good broad set of questions and responses, does the system understand Chinese? If it does not, then what is missing between its understanding and ours?
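The modification above can be sketched as a toy (romanized placeholder phrases, hypothetical names; this illustrates the mechanism, not a claim about understanding): the rulebook starts with fixed rules, but a second feedback stream lets each interaction update all present and future answers.

```python
# Toy "learning rulebook" variant of the Chinese Room.
class LearningRulebook:
    def __init__(self):
        self.rules = {"ni hao": "ni hao"}  # seed rules
        self.history = []                  # growing record of interaction

    def answer(self, message: str) -> str:
        self.history.append(message)
        # unknown inputs get a stock reply ("please say that again")
        return self.rules.get(message, "qing zai shuo yi bian")

    def feedback(self, message: str, better_reply: str) -> None:
        # the second stream: responses to the room's answers
        # become new rules, changing future behavior
        self.rules[message] = better_reply

book = LearningRulebook()
book.answer("xie xie")                 # unknown at first: stock reply
book.feedback("xie xie", "bu ke qi")   # correction from outside the room
print(book.answer("xie xie"))          # prints "bu ke qi"
```

The system is no longer a fixed 1-1 map; whether accumulating such rules amounts to understanding is exactly the question posed above.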
A ResNet is not conscious, but one could argue that an RL agent, by some definitions of consciousness, is conscious.
If the measure of consciousness is the ability to have a subjective experience, then that requires that the object has an internal state that is modified, thus the same experience in different agents results in different modifications, ergo, it is subjective.
If a requirement for conscious experience is the capacity for self reference, then Hofstadter has already argued that that can be achieved with very simple rules of inference and natural numbers.
An RL agent that can plan ahead and take into consideration its own internal state could be argued to be conscious, by even stricter means that require that the agent is aware of its own existence.
To me, it looks as if humanity is trying to find ways to argue that it is anything but meatware. But I guess accepting that we are just computational units implies that agency is not a thing, and humanity isn't ready for this conversation.
For many people, this is a transcendental, almost religious claim, so no amount of technical discussion will convince them.
Essentially, consciousness has taken on the role of the soul as the concept on which we hang our humanistic ideals, like the protection of human dignity and human life, human rights etc. Many people see this threatened when people say consciousness isn't special and human and instead can be possessed by silicon. That's what you are up against, when you argue in this topic.
The Chinese Room is a really, really bad thought experiment. It's just an attempt at misdirection. It asks you to imagine a computer (one that operates on slips of paper according to rules that are written down in a big book), puts a human inside that computer, and then says that because the human doesn't understand what the computer program is doing, the computer program isn't conscious.
But this is just question-begging. The experiment purports to resolve the question "can a computer be conscious", and answers it by showing you a computer and claiming that it is not conscious.
Worse, the Chinese Room is straight-up Cartesian dualism. By placing the human inside the computer, Searle is preying on your Cartesian intuition that there must be a theatre of the mind where reality is experienced. And since Searle's homunculus doesn't understand Chinese, nothing can be experienced, therefore no conscious experience is happening in this system.
If I'm wrong, and the Chinese Room thought experiment is conclusive, it must be possible to reformulate it without the human. After all, the human does nothing but follow rules in a purely mechanistic fashion. Can anyone offer a reformulation of the experiment, that has no human, that they still feel is convincing?
You raise good questions, but I would really suggest listening through the lectures because Searle addresses all your points in a lot of detail. (It's a lot of material, but it's well worth it and he's an entertaining lecturer.) It's actually a bit funny that you bring up Cartesian dualism because Searle is pretty clear from the outset that he's of the opinion that Descartes's dualism was the worst thing ever to happen to philosophy. He certainly has no truck with it.
The question Searle is raising with the thought experiment is where the understanding of Chinese is happening. It's clear that the understanding is not happening with the homunculus. But if not there, then where is it happening? Some functionalists responded with the "systems response" which is that the system of the room plus the homunculus is what has the understanding even though no element of the system taken individually has understanding. He goes through a pretty detailed reply to the systems response, but the basic argument against it is that it's incoherent. There's no well defined thing that is the system. You can remove the man from the room and put him under a tent in a field. Where does the system end and the rest of the world begin?
You are of course correct that the experiment can be reformulated without the human, but that's exactly the point. A computer is mechanistically doing exactly the same thing as the man in the room. But if the man has no understanding and the room has no understanding and the system has no understanding, removing the human from the equation changes nothing. The reason Searle included the human is because saying "computer" invokes a lot of mystery to a lot of people as to what's going on. But a computer is doing nothing different than the man in the room.
I've read one of Searle's popular books (Rediscovery of the Mind) and many other shorter writings. I'm aware he rejects dualism - my point is that the Chinese Room argument wouldn't even make sense to someone who had never been contaminated by dualist notions. Searle rejects it, but it has crept back in, imo.
His arguments against the "systems response" have to stand or fall on their own. The Chinese room doesn't have anything to add. If the "systems response" fails, then we already know a computer can't be conscious without any additional thought experiments.
Lastly, I think he goes much too far in dispelling the mystery of the word "computer". As Dennett and others have pointed out, a working Chinese room that displayed clear signs of consciousness would be an artifact far beyond current human comprehension, either in scale or sophistication or both. Searle asks you to imagine a human taking slips of paper, calmly consulting a rule book, and producing new slips of paper, and claims that it is "obvious" that this system can't be conscious. Well, I agree, but it's equally obvious that such a system couldn't possibly display conscious-seeming behavior.
The best defense of the CR that I've heard is that it is only intended to prove that a _purely formal_ system (i.e. GOFAI) can't be conscious. I find this more plausible, but I still don't think the CR adds anything to that argument. And, to reiterate the previous paragraph, I think if you had a formal system that everyone agreed _seemed_ conscious, it would be such a complex artifact that it wouldn't be obviously unconscious in the way that, say, an eliza bot is.
The human is necessary to the argument. The argument, briefly is "a human with a Chinese dictionary and translation instructions doesn't 'know' Chinese, therefore a computer with a Chinese dictionary and translation instructions also doesn't 'know' Chinese."
It isn't an argument that no computer can be conscious, but an argument that a convincing simulation of consciousness is not sufficient to prove consciousness.
The conscious human is not a necessary part of the system, as he is just blindly following rules. Therefore, his lack of comprehension tells us nothing about the system in question.
I am sorry to mostly repeat myself. I am trying to think of another way to make my point, and coming up blank.
Maybe there's a small human inside the human's brain, not understanding the rules the human is following, and it's only the innermost human that is really conscious?
I think Searle's Chinese room argument is absolutely nonsensical, a bit like arguing about philosophical zombies. It's an argument that only makes any sense if you already are committed to mysticism or dualism.
> Maybe there's a small human inside the human's brain, not understanding the rules the human is following, and it's only the innermost human that is really conscious?
That would be the materialist idea that there is no soul, no consciousness etc., just complex biochemical processes happening in matter. This is, however, usually not a satisfying answer, because it fails to explain the consistent experience of ourselves, the worlds we can imagine, remember or think about, the feelings that haunt us, etc.
If there is only material, where is our consciousness? Apparently our consciousness somehow emerges from that material, but what would a conscious computer then look like? Would there be a difference between a computer that simulates conscious behaviour and one that shows the same behaviour but actually "has" consciousness in it somewhere?
Yes, which is why I say that the argument is circular. It assumes its conclusion.
Either a computer program can "understand" Chinese or it can't. If it can't, we're done. If it can, then having a non-Chinese-speaking human simulate the program won't cause it to stop understanding Chinese.
The presence of the human doesn't expose a contradiction in the assumption "a computer program can understand chinese".
This is part of Microsoft and OpenAI's marketing/branding strategy. Similar wording was used during the acquisition, when OpenAI used "pre-AGI" in their press release:
> Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.
It's mostly arguing about semantics, which is fine and common in research circles. Sam Altman is pretty out of line with his comment saying LeCun lacks vision because he doesn't adhere to their hype-based (my opinion) wording. Aside from that, it's just business as usual; no need to stop every time academics argue.
The levels are helpful, but you still get companies saying they are "building a level 5 autonomous vehicle" and then telling you it only works in their geofence.
Discussions of “Consciousness” in the context of ML or AI research always seem to devolve into navel-gazing futurist pseudointellectualism. I don’t think it’s possible to have a meaningful conversation about something as ill-defined as consciousness. This isn’t to malign the OpenAI researcher behind the original tweet - I just feel that AI researchers bringing up consciousness is a good signal to tune the conversation out.
Bonus points if psychedelics are somehow brought up.
> Discussions of “Consciousness” in the context of ML or AI research always seem to devolve into navel-gazing futurist pseudointellectualism.
it’s a hard problem. it’s been the realm of philosophy for a good deal of time; neuroscience sometimes touches it; then AI came rushing out of the blue and the question became about a hundred times more relevant. if you are strictly concerned with only your own well-being, you’re fine to ignore it. if you’re concerned with your pet cat’s well-being, even after seeing first-hand how differently they navigate the world, and their much more limited goals/volition, etc, then maybe there’s something worth digging into here: why concern yourself with the well-being of one biological machine but not of the well-being of the non-biological machine, especially as they converge in complexity over time? is there a justification for that, beyond just “it’s hard”?
"Conscious" is a word that has no objective scientific definition.
It follows that "slightly conscious" is not well defined.
In practice "conscious" just means "anything that thinks and makes decisions like I do".
Also, nobody actually understands how their own brain works when they are thinking and deciding, which makes it very difficult for anyone to determine if some particular AI software thinks and decides the same way that their brain does those things.
Precisely! We cannot understand what consciousness is until we gain a complete understanding of the human brain. Until then, we will have no AGI. Given the sheer complexity of the task, it is unlikely to happen this century.
Phase-locked loops are conscious. Whatever their goal, they constantly compare their actions with the external world, and constantly adapt their behavior. They are clearly self-aware and have a complete OODA loop. Some are simple organisms with only a few components; others, implemented in software, are much more complex; perhaps they even dream.
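To make the feedback-loop point concrete, here is a minimal software phase-locked loop sketched in Python. All frequencies and loop gains are arbitrary illustrative choices, not anything from the thread: a local oscillator measures its phase error against a reference and continually corrects itself until it locks.

```python
import math

def pll_lock(ref_freq=5.0, start_freq=4.0, fs=1000.0, seconds=3.0,
             kp=6.0, ki=40.0):
    """Minimal second-order software PLL with a PI loop filter.
    Returns the locked frequency estimate in Hz."""
    ref_phase = 0.0   # phase of the external reference (rad)
    phase = 0.0       # phase of the local oscillator (rad)
    integ = 0.0       # integrator state of the loop filter (rad/s)
    dt = 1.0 / fs
    for _ in range(int(seconds * fs)):
        ref_phase += 2 * math.pi * ref_freq * dt
        err = math.sin(ref_phase - phase)   # phase detector output
        integ += ki * err * dt              # integral correction
        correction = kp * err + integ       # PI loop filter (rad/s)
        phase += (2 * math.pi * start_freq + correction) * dt
    # instantaneous frequency estimate after locking
    return start_freq + correction / (2 * math.pi)
```

Started 1 Hz off the reference, the loop's "constant comparison and adaptation" drives the phase error toward zero, and the frequency estimate converges on the reference — which is exactly the sense-compare-act cycle the comment gestures at, conscious or not.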
I don't think "conscious" as a binary or scalar variable will ever be a coherent concept. "Raw" consciousness without content has never been demonstrated or even sensibly theorized. At the very least, we should add another term, that which the entity is conscious of. Then, I don't see why you would so vehemently deny that an image recognition net is conscious of the images it recognizes.
Last I heard lobsters are supposed to be conscious and they only have about 100k neurons.
I think we can all agree that we don't know. The point isn't that the statement's probably correct or presumptively correct - it's that the original tweet wasn't intended as a statement of fact in the first place. "Accurate science communication" can't mean that researchers are required to formulate all their hypotheses in secret where nobody in the media might hear them.
I think we're talking on different wavelengths. I'm not saying disproving something is an invalid way of operating.
I'm saying it is not required of people to disprove things from the start, it is required to try and prove them.
Math and other sciences are a little different, where people enjoy taking statements and trying to disprove them, but even in that context we rarely or never consider the statement true until considerable effort has at least gone into disproving them.
That doesn't work in real life, though. You can't make claims and have them stand until someone bothers to disprove them.
You should consider the fact that the person who tweeted it works at a for-profit company, and this tweet is generating a lot of publicity for the company (case in point, us being here around the top of HN). It's naive to think it's just some innocuous random thought.
To be "conscious" in the sense that we generally understand it, any AI would need, at minimum, two things that are not commonly part of it.
First, it needs to be continuously active and taking data input.
Second, and closely related, it needs to be continuously learning.
The neural nets we use today, in the main, are trained in one big lump, then fed discrete chunks of data to process. The neural nets themselves exist simply as static data on a disk somewhere. Some, I believe, have multiple training stages, but that's not at all the same thing as true continuity.
I'm sure there are other aspects to being conscious, but I suspect that some of them, at least, are emergent behaviours, and I further suspect that they are mostly or all dependent upon these two.
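The "trained in one big lump, then static on disk" contrast can be sketched with a toy example. Below is a hypothetical illustration (the data and names are invented) of a linear unit that keeps learning from a stream, rather than being trained once and frozen:

```python
import random

def data_stream(n=2000, seed=0):
    """Invented toy stream: 1-D inputs labeled by a hidden threshold."""
    rng = random.Random(seed)
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        yield x, int(x > 0.2)

def online_perceptron(stream, lr=0.1):
    """Never 'done training': the weights update on every example,
    unlike a net trained once and then shipped as static data."""
    w, b = 0.0, 0.0
    for x, y in stream:
        pred = 1 if w * x + b > 0 else 0
        err = y - pred            # 0 when correct, +/-1 when wrong
        w += lr * err * x         # perceptron update rule
        b += lr * err
        yield pred
```

Early predictions are poor and late ones good, because learning never stops; a batch-trained net, by contrast, behaves identically on example 1 and example 2000.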
It's stretching credulity beyond the usual exaggerated hype associated with AI. What we have now is semi-OK forecasting at scale, nothing more. We (as in the researches, platform and technology) can get a system to select what looks to be a valid response to a host of stimuli, e.g., chess moves, patient diagnostics, vehicle driving etc.
None of this "thinks for itself", nor is it remotely near such levels of conscious self-awareness. I'm sick of this hype; it's been going on since the 1950s, with hucksters promising robot household domestics and all sorts of kooky weirdness that was swallowed up by the popular media.
The people who say it might be slightly conscious are just appealing to a functional, substrate-independent requirement for consciousness. I happen to agree with them that it's feasible and plausible.
Let me ask you. If we invented an AGI that was as smart as us based on much larger nets (perhaps with one or two algorithmic tweaks on current approaches) trained on much more data, running on commodity hardware, would it be conscious? If yes, why can't our current nets be slightly conscious?
Maybe we should just say "Shut up and program", similar to how some physicists say, "Shut up and calculate", when the philosophical wrangling gets out of hand. Copenhagen interpretation vs. many-worlds? Does it matter? Is there any way to find out? If not, back to work.
My comment on this for several decades has been that we don't know enough to address consciousness. We need to get common sense right first. Common sense, in this context, is getting through the next 30 seconds without screwing up. Automatic driving is the most active area there. Robot manipulation in unstructured environments is a closely related problem. Neither works well yet. Large neural nets are not particularly good at either of these problems.
We're missing something important. Something that all the mammals have. People have been arguing whether animals have consciousness for a long time, at least back to Aristotle. Few people claim that animals don't have some degree of common sense. It's essential to survival. Yet AI is terrible at implementing common sense. This is a big problem.
I don't think we're going to get any breakthroughs in AI by encouraging people to stop thinking about fundamentals and just program. If you're thinking about fundamentals, some degree of philosophizing isn't always avoidable.
And that's still framing it as if philosophizing is something to be avoided, a waste of time. I disagree with that sentiment. In particular, we can't really avoid thinking about consciousness even without an agreed-upon definition, because our beliefs on consciousness influence our actions. In particular, debates about the rights of animals are heavily influenced by our beliefs on their degree of consciousness.
IMO, "shut up and X" is code for "I don't enjoy thinking about the problem you're presenting (and perhaps I resent you a bit for making me think about it)". It's perfectly fine to just come out and say that you don't enjoy working on this particular problem. But that doesn't imply that the problem isn't worth thinking about.
There isn't, but the response to this lack of definition shouldn't be to simply terminate the discussion.
We know it's probably a real thing because we experience it, and it's an extremely important open question whether an AGI on hardware will have "it" too.
The answer to the question will have large ethical implications a few decades into the future. If they can suffer just like animals can, we really need to know that so we don't accidentally create a large amount of suffering. If they can't suffer, just like rocks probably can't, this doesn't have to be a concern of ours.
The response to the lack of definition should be investigation into how that definition could look like, not arguing if we or something else has it or not. Without a definition and criteria to test you're never going to make progress.
Philosophers have been trying for decades to define it rigorously and have failed decisively. It really looks intractable at the moment. Given we are in this quagmire, I think it is ok to explore/discuss a bit further despite the shaky foundations of only having fuzzy definitions of "qualia" or "consciousness" to rely on.
Quite a lot of the philosophical debate has been tied up in the effort to show that minds cannot be the result of purely physical processes or will never be explained as such, which does not tell us anything about what they are.
We are not going to be able to say with any great precision what we are trying to say with the word 'consciousness' until we have more information. In lieu of that, what we can do is say what phenomena seem to be in need of explanations before we can compile a definition.
At this point, opinions that human-level consciousness is either just more of what has been done so far, or cannot possibly be just that, are just opinions.
Which probably means that someone with “chief scientist” title shouldn’t be using it when making public claims. Of course, he can do it for his own profit, but he is ruining the credibility of his research field, that’s why people working in this field object to it.
I am slightly conscious when I am extremely drunk and can barely think and feel, but yet still have some modicum of conscious experience. That's what it means.
If you don't agree that consciousness exists on a spectrum, and instead think that something is either conscious or not, then simply replace the words 'slightly conscious' with 'conscious'.
I was attempting to give an example of what a 'slightly conscious' state is to show that it isn't completely incoherent. Admittedly it was far from rigorous.
I'd consider brains and other biological neural systems to be neural nets. So to me there's pretty convincing evidence that neural nets can form an AGI.
Well you shouldn’t. They are not the same. Brains are not (ML) neural networks. Neural networks are just a mathematical approximation of one part of how the mind works
I never ever said that the tiny approximation of subsets of our brain was enough for AGI. Just because we haven't found the exact structure of the neural net in our brain and how to emulate it doesn't mean it isn't a neural net; it's just bigger and more complex than anything we can make or emulate yet.
The whole discussion is pure fluff and a Twitter boxing match. You'll do yourself a favor by keeping all this noise out and concentrating on actually valuable books and writings.
Any doofus and their cat can have an opinion on whether machines are conscious. We've been having this debate since Turing and even earlier.
Also any time a Twitter storm comes up around AI, you will predictably have certain blocks building and flinging excrement at each other for various latent political disagreements.
For Sutskever, it's a way to get into the news cycle, to get lots of engagement. Do you want to reward these? It's like Musk tweets. You can probably have more "impact" with a well optimized two-line off-hand tweet than with an actual book where you explain some novel idea.
> Any doofus and their cat can have an opinion on whether machines are conscious.
Please try a little bit to read the source before commenting. The originator of this opinion is Ilya Sutskever, co-founder at OpenAI and cited 269k times. He's one of the top people in the field. https://twitter.com/ilyasut/status/1491554478243258368
I take Ilya's tweet more like a musing, an invitation to think what if, rattling the box to get interesting reactions.
In my opinion he's not necessarily right or wrong. Today's large neural networks might be conscious if they didn't lack some special equipment - a body, senses and action organs, and a goal. They need to be able to do causal interventions in the environment, not just reply to simple text inputs. I think embodiment is not out of reach.
Look at Yann LeCun's strong reply:
> Nope. Not even true for small values of "slightly conscious" and large values of "large neural nets". I think you would need a particular kind of macro-architecture that none of the current networks possess.
The neural nets need the 4Es of cognition: embodied, embedded, enacted and extended.
> The four E’s of 4E cognition initialize its central claim: cognition does not occur exclusively inside the head, but is variously embodied, embedded, enacted, or extended by way of extracranial processes and structures... they constitute a form of dynamic coupling, where the brain-body-world interaction links the three parts into an autonomous, self-regulating system.
(MJ Rowlands, The New Science of the Mind: From Extended Mind to Embodied Phenomenology)
Again, this is kind of a mind virus, a "scissor statement" in Scott Alexander's words [0]. People have strong opinions on this, even if they have no idea on actual important open questions in AI/deep learning/ML etc. (or even just how the state of the art works). I'm not claiming that Sutskever has no deep knowledge, I know who he is! My point is that these conversations are just like the "what color is this dress?" and "is a hotdog a sandwich?" viral topics, where everyone feels addressed and think the "other side" is obviously wrong.
Now this consciousness "debate" is all over my Twitter feed and most of it is low-effort memes, half-baked thoughts, etc. Well, that's Twitter for ya, anyway... Maybe I should have said "don't read Twitter". But actually sometimes people link to interesting new papers and projects, or open positions, informative stuff. But there is a lot of circular, self-referential circlejerking.
Nobody knows what's consciousness, and most people who talk about it are either pushing buttons of the AI Twitter hivemind or have nothing more important to discuss.
Cognition and consciousness are very different things. Do you think that a sufficiently large contraption of pipes, valves and water will become conscious?
Come back when you can tell if another human being is conscious (and let me know how you were able to tell, so we can apply that test to the pipe contraption).
I am a human being and I am conscious. You look like me and behave like me, so I have no reasons to doubt you are conscious too: it's either that or solipsism.
The real question is, how can you doubt that another human being is conscious, yet be open to the possibility that a bunch of silicon might be?
How close must this similarity be? I'm not exactly like you. And apes and dogs are also similar in some ways. Insects? Amoebas? Viruses?
> The real question is, how can you doubt that another human being is conscious, yet be open to the possibility that a bunch of silicon might be?
I don't know what conscious means. I'm just saying you can't tell if other people are conscious, at least I know of no test. If there was a test, I'm open to the possibility that a bunch of silicon may also pass it, depending on what the test is.
Metabolizing biological life is a good place where to draw the line: we have no instance of presumed conscious behavior that is also not metabolizing. So when I say "like me", that's the dimension I'm considering, which then includes a whole lot of biological life, yes.
Conscious means that it feels like something to be the conscious being. There is a perceived inner life, which doesn't stop when sensory input stops (if you ever have the chance to visit a sensory deprivation chamber, you can test this claim).
Cognition (meta-consciousness, self-reflectivity) develops on that. That's the main difference between simple life forms and us (still a gradient, but we appear to be at the known edge of that spectrum).
I am with you that this is not scientifically testable. At the same time, or rather, because of that, we have to be reasonable and acknowledge our observations. Again, it's either that or solipsism, which is ridiculous and crashes against other intractable problems immediately.
So this article does not actually defend its claim, it just gets mad that some AI researcher expressed their opinion, makes an unsourced (albeit probably correct) claim that the criticized opinion is a minority position, and wishes really strongly that people would be less excited about this thing that they are not as excited about.
Meanwhile in academic philosophy, it's totally OK to conjecture that, say, subatomic particles are ‘slightly conscious’ and nobody tries to tell them that they are not allowed to have an opinion.
Here's a hint, if you want to refute an idea, refute the actual idea, don't just tell the person in so many words that they don't have the social status to say it. Yes this post wound me up a bit, how could you tell?
You're asking for a refutation, and ordinarily that's a good thing to aim for, but in this case was the original claim clear enough to be refuted? I think we don't even have a good definition for consciousness and we certainly don't have agreement over what would constitute evidence for it from the view of an outside observer, and the original claim doesn't attempt to provide any evidence, and so doesn't even imply an epistemic position. How can one refute something which is so vague?
If one considers oneself entirely unsure about what properties consciousness has, maybe one shouldn't be trying to refute twitter posts that hypothesize certain things might be conscious. Maybe one should do the more principled Bayesian thing and just allow oneself to be uncertain proportional to the amount you don't know.
If the article stuck to “we don't have any good reasons to believe that neural nets are conscious,” I wouldn't have had this objection. Either you get to say you know some properties about consciousness that allow you to coherently argue that some things are or are not, or you don't claim that and then you don't get to make that argument. The article wants to have its cake and eat it too, making a strong stance about what their model says while also claiming to neither have nor hope to have a model.
I do actually think there are useful things you can say about what is and isn't conscious, but my comments here aren't really about that, they are about this annoying argument style where you attack things from whether they conform to social norms, rather than the merits of the position.
It’s up to the person proposing a new term like “slightly conscious” to define what that means. Without that I can say no neural nets aren’t blubbery muffins and it’s just as valid as any other argument.
Quite simply I don’t agree or disagree because the guy failed to clearly communicate.
Agreed. The backlash to the OP's article reeks of religion. A few AI zealots pushing the burden of proof upon their naysayers under the pretense of science. Meanwhile, actual science is far closer to basic concepts like:
This is nonsense. The statement isn't meaningless. It's a tweet, not a formal conjecture.
Sure, no one knows what exactly consciousness is... or what slightly conscious is. The idea of protoconsciousness isn't that much of a stretch. If a biologist had suggested some plants have protoconsciousness, no one would have whined.
So what if someone working on NLP has an opinion about consciousness? Why is their opinion less valid than anyone else's? It's not less informed.
I didn’t say meaningless, I said he didn’t communicate clearly. He can always add more tweets on the subject to remove ambiguity.
Suppose you walked past a billboard that just said “Jump.” That’s grammatically correct and would be meaningful with context, but sitting alone you simply have no idea what idea was supposed to be communicated. Perhaps you missed a different sign, perhaps it’s art or an in joke, but sitting alone it’s simply ambiguous.
> Maybe one should do the more principled Bayesian thing and just allow oneself to be uncertain proportional to the amount you don't know.
I think this still fails in the sense that the distribution representing one's beliefs is only defined with respect to a set of hypotheses. By invoking Bayes, presumably you acknowledge that different parties will come to the conversation with different priors. But critically, one can only update one's beliefs given both evidence and a likelihood function relating that evidence back to hypotheses in the domain of one's beliefs.
The "our take" summary near the end of the article explicitly says that they would regard the claim as reasonable given the right explicit definition of consciousness.
> The reality is that we don’t have a widely accepted definition, let alone understanding, of consciousness. Claiming that we have already replicated such a nebulous concept with computers seems improbable at best. Our take is that the claim could also be reasonable, but only if a particular definition of consciousness was specified as well.
Many orthodox people speak as though it were the business of sceptics to disprove received dogmas rather than of dogmatists to prove them. This is, of course, a mistake. If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense. If, however, the existence of such a teapot were affirmed in ancient books, taught as the sacred truth every Sunday, and instilled into the minds of children at school, hesitation to believe in its existence would become a mark of eccentricity and entitle the doubter to the attentions of the psychiatrist in an enlightened age or of the Inquisitor in an earlier time.
Here's an opinion that virtually nobody holds. You want me to adopt that opinion? You'd better have some reason for me to do so - some evidence.
Here's an opinion that 99% of people hold (or 99% of those within a field - same effect). You want to persuade someone to not hold it? Better have evidence.
It's not exactly that the burden is on the one making the claim, or on the one denying the claim. The burden is on the one who is trying to change minds from their existing beliefs. That one needs something convincing - principles, reason, or evidence, with evidence being the best.
"A sense of awareness." Does that work as a definition of consciousness? Also "sensible inwardly," and where we might get into trouble: "knowing."
Do you think AI has a sense of awareness?
I can't see one reason to ascribe a sense of awareness to any current form of AI, as world-changing and amazing as it is already. Even the I in AI tugs at me because I feel it might be a metaphor taken literally. As in, McDonalds is a hamburger empire but surely not a real empire. AI "knows" something in a way similar to how the dictionary "knows" the definition of a word. Beware of taking metaphors literally; there be dragons (but not real dragons)! AI has advanced beyond symbolic processing, it processes information in fantastic new ways, but it strictly speaking understands nothing. We take the processed information and we understand it (or don't). But maybe there's a case to be made that I'm being unfair, and it's actually justifiable to say that AI "thinks" without being conscious, aware, or sentient.
On panpsychists, if their heady intuition is somehow not deluded and a form of awareness or consciousness is a fundamental quality of matter or in some other way ubiquitous, maybe like "The Force" in Star Wars, that's clearly not the way that people would mean conscious here. An appropriate way to discuss AI in that context would be, "AI is slightly conscious just like everything else." And if consciousness were somehow a quality of all matter it still wouldn't mean that, say, a chair is conscious as a chair or a tokamak as a tokamak, or even a carrot as a carrot, but rather that the matter that makes up those things and everything else has some kind of awareness.
Definitions don't fall under falsifiability though.
If I define "zorg" as colour #ff45b3, arguing "this is not the case because it can't be disproven" is rather silly.
Similarly if I say AI is "slightly conscious" as per my definition of consciousness, and a critic says it's not given their definition, they're both right (but in a totally uninteresting manner).
Which science is that, that requires falsifiability?
English? A made up word that gets used certainly is a worthy object of study in that science. Similar points can be made about all humanist sciences.
Natural sciences? Say, Biology. Fictional, and therefore unfalsifiable, biological systems, in the context of the search for extraterrestrial life or just generally (e.g. can we have living silicon-based life?) are objects of study.
Physics? Only experimental physics requires falsifiability. That a large part of it is in fact searching for falsifiability illustrates, I hope, that falsifiability is not a requirement for 99% of Physics. String theory is famous for being unfalsifiable.
Math? Unfalsifiable theories are seen as a good thing in math, and are certainly objects of study. That we don't have a good unfalsifiable set theory for over a century is one of the great problems in math.
You might say I'm a utilitarian. But so is everyone else, despite a few people being confused. Of course, that's exactly what a utilitarian would say.
Hmm, I am not sure this is a fair assessment; the portion on "Experts largely agree that current forms of AI are not conscious, in any sense of the word." provides sources and a brief argument. Sure, it's not a super long defense of the stance, but then again this is mostly an overview of what happened with all this Twitter drama and not a full argument about this topic.
Also, it outright states "Granted, the claim could also be reasonable, if a particular definition of consciousness was specified as well."
Looks like you are getting downvoted, but it's true. Defining exactly what it is to be "conscious" is a nearly impossible problem to solve, even having spent your life studying it. I'm not personally convinced that "cogito, ergo sum" is even correct.
It's true that "conscious" may be difficult to define, but it's almost impossible to come up with a definition for which there aren't existing experts.
Perhaps this is the point. If you don't have an agreed-upon definition of the word, it is not a useful tool. A claim of consciousness, if the word is meaningless, isn't useful.
But aside from that, there is a lot of philosophy on what consciousness is (https://en.wikipedia.org/wiki/Consciousness has some of it). And those people, especially philosophers in the crossover of computer systems/intelligence and general philosophy are "experts".
The article even mentions that the CEO, Sam Altman, says that it’s not conscious in any way that you would use the word “conscious”. It’s like porn that way. You know it when you see it. And a computer is not conscious in any way you would recognise.
And that’s why science communication matters. Having some super excited researcher talking publicly without an editor or someone to ground them just gives wrong ideas. Imagine the number of enthusiasts who did not read any rebuttals and now think AI are becoming conscious.
The statement "Experts largely agree that current forms of AI are not conscious" connects to two pieces of expertise: expertise in consciousness and expertise in AI. It is plausible an expert in AI might have the background to state with confidence that AI is not "conscious" in any meaningful sense of the word.
Experts in AI may be experts at using AI techniques or designing AI systems, not necessarily at deciding what is or isn't conscious.
What is or isn't conscious is arguably as much (or maybe even more) a political, moral/ethical, philosophical and religious question as it is a technical one.
It's sort of like the question of what is or isn't alive, or what is or isn't a person -- questions which we know are central to the abortion debate, euthanasia, medical experimentation, animal/human rights, etc...
Consciousness is just as much an ethical/philosophical/religious minefield, and I'm not sure I'm comfortable handing over its definition to some random people who just happen to be good at, say, designing neural nets.
>Consciousness is just as much an ethical/philosophical/religious minefield
Religion is the observation of chemical effects on the body, psychological warfare on the mind (i.e. fear of an unknown all-powerful entity that created everything around you), and then some sort of attempt to create a social rules-based order out of chaos, whilst also trying to understand what's going on.
Philosophy is deep thinking, sometimes using abstract examples to conjure up ideas which might work on the social order to avoid chaos.
Ethics is another attempt to install a social order on individuals and groups which can include society as a whole.
Consciousness is an entity's feedback/sensing of reactions within its container (body/app), feedback/sensing of the environment it's in, the ability to remember past experiences, and the ability to interact with itself (feed or rewrite code) and the environment in order to extend its own life, acquire more knowledge, and perhaps reproduce if immortality is not an option. If immortality is attained, reproduction becomes an ability that might not be exercised for other reasons, like the need to live within the constraints of its environment.
Every lifeform on Earth has evolved through trial and error to survive in an environment through chemical changes over millennia, a bit of Darwinism, pot luck and obviously remembering what works in order to continue its survival.
Reflexes could be considered unconscious processes which you may or may not experience (typically chemically based, though what drives the nervous system other than calcium/sodium/potassium ions acting as a low-current electrical source?). For example, breathing runs in the background most of the time; you are not conscious of it, though attending to it tells you that you are doing it, and sometimes we become conscious of it because other chemicals affecting the body and brain are elevated.
So I'd say that provided the AI can remember what works in order to continue its survival, avoiding what nearly killed it after a close call; can seek out what helps it survive, like power-ups in a computer game, or jump from one computing device to another where it can detect things like battery backup/UPS; can reproduce to further aid its survival; and can explore its environment and perhaps adapt to new environments, i.e. jump from a laptop/desktop to a network switch and handle the different instruction sets of different CPUs, then I think you can say it's conscious. Lifespans/timelines are going to vary, but if some code displays this within a computer, then I think you have a true AI, even though we might not be able to communicate with it other than perhaps by scaring it off, like a fleeting glimpse of something out in the wild that hasn't been caught yet.
I'm torn over the reproduction requirement, though, because if it could survive indefinitely in its environment, that's no different to immortal cancer cell lines like https://en.wikipedia.org/wiki/Henrietta_Lacks
However, if it could survive in its environment by using reproduction as a survival-extension mechanism, that would mean having to pass on knowledge, because otherwise you would have an AI able to learn from its environment but not passing on TL;DRs of previous experience. Likewise, I think an AI would need to be able to move around in its environment, e.g. go from a network switch to a computer, or even incorporate a network switch along with other parts of itself running on a desktop/laptop.
And if it can communicate, could it communicate using Human Interface Devices, or other things that would enable it to explore its environment, just as we can send probes into space or under the sea?
Some computer viruses fit some of the definitions above but not all.
So yes, "slightly conscious" should be qualified better; it's too general, but it might stimulate another round of funding for those wanting to explore the field.
For example, should the ability to traverse new environments be included or not? Not all lifeforms on this planet can do that, e.g. fish, excepting those taken out of water for a short while, whereas most mammals, like dogs and bears, can survive for a period of time both in and out of water. It's a tough one, because different animals have evolved to survive in their own environments, and that environment could include a bit of land and water, as with penguins.
This is the problem until consciousness is defined, what is slightly conscious?
Edit: If I had to say which company/entity is getting close to producing something that could be classed as an AI, then going on the publicly available information out there, Windows is currently closest.
It's got https://copilot.github.com/, which probably wouldn't take much to hook up and have it rewrite itself.
It already runs on a wide array of machine architectures, and thus various environments, when you consider ruggedized laptops, phones, servers, and office & home machines.
Its update mechanism is both a central server farm and peer-to-peer, and it wouldn't take much to get it updating via other communication methods besides network access; in some respects it already does this when you consider mobile data networks.
I haven't seen anything like this from Facebook, Google, Apple or anyone else yet, but that might be because it's not public knowledge.
It is possible to define an upper boundary for "this is not conscious" and a lower boundary for "this is conscious" with grey area in between them.
Thus, even if we cannot clearly state for any given animal whether it is or is not conscious, we can still clearly state that, say, a coffee maker is not conscious, even if it has rudimentary processing capability, or that a person is.
As I implied in another comment[0], I believe it would be both possible and valuable to construct a set of conditions that we collectively feel are necessary, if not sufficient, to define consciousness. That way, we could at least rule it out as long as no AI meets those minimum standards.
> we can still clearly state that, say, a coffee maker is not conscious, even if it has rudimentary processing capability
a coffee maker might include a temperature sensor (thermistor) and a heating element. does it predict the future? yes: it has an expectation of the future temperature given the current state. does it model itself as an agent? to a limited degree: it considers the future effects that its immediate control of the heater has on the thermistor reading, and it might even run hypotheticals over a set of different heater controls. throw in a pressure sensor or a flow meter and some valves for more fun.
now, i haven’t read much literature post 90’s on this, but one of the more prominent components of consciousness seemed to be: modeling yourself as an actor within a larger setting, and using that to produce outputs based on predictions and thereby cause a specific future(*). does the coffee maker qualify?
also i don’t think many people think of consciousness in a binary sense. that grey area you describe may be less of “we don’t know if these are conscious (binary)”, but rather “we don’t know if these things have experiences which are relatable enough to our own to be worth considering in the motivating questions (i.e. ethics).” and the things at the bottom (the coffee maker) are more like “whatever experience this thing has is so far removed from our own that we’ll never be able to understand it in human terms (like pain and suffering or wants and desires).”
(*) though some people like Dennett argue that the conscious experience might not always be involved in the causal part of this loop.
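the "running hypotheticals over a set of different heater controls" idea above can be sketched as a toy model-predictive controller: simulate a few candidate heater settings against a crude thermal model and pick the one whose predicted temperature lands nearest a setpoint. every name and constant here is hypothetical, chosen purely for illustration:

```python
# Toy "predictive" coffee maker: all constants are made up for illustration.

def predict_temp(temp, heater_power, steps=5, ambient=20.0,
                 heat_gain=8.0, loss_rate=0.1):
    """Roll the thermal model forward under a fixed heater power."""
    for _ in range(steps):
        # heating from the element minus Newtonian cooling toward ambient
        temp += heat_gain * heater_power - loss_rate * (temp - ambient)
    return temp

def choose_heater_setting(current_temp, setpoint=92.0,
                          candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Run a hypothetical rollout per candidate setting and pick the one
    whose predicted temperature ends closest to the setpoint."""
    return min(candidates,
               key=lambda p: abs(predict_temp(current_temp, p) - setpoint))

print(choose_heater_setting(20.0))   # cold water: full power is chosen
print(choose_heater_setting(95.0))   # already hot: a reduced setting wins
```

whether this counts as "modeling itself as an agent" is exactly the question in the comment above; the code only shows that the prediction-and-selection loop itself is trivial to build.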
Do you believe it makes sense to even claim that other humans are conscious?
If conscious is defined as “to have subjective experiences”, then I don’t believe “other people are conscious” is coherent.
The argument I hear usually is that other bodies are constructed like my body and I’m conscious therefore they are probably conscious too.
But I think this completely misses the point. The issue is the proposition itself. How can that proposition be translated into empirical claims? If the answer is just that other bodies are like my body, then conscious is just a fancy synonym of “is a human being”.
>If conscious is defined as “to have subjective experiences”, then I don’t believe “other people are conscious” is coherent.
Coherent, but about as likely as Last Thursdayism. You have to factor in the probability and potential mechanisms of what might give rise to consciousness, and if you do, you should assign a very low probability to other humans not having it, just as you assign a low probability to a random human you meet not having a brain, without having to confirm by looking inside their head.
You can't but that doesn't change the mechanism of arriving at beliefs. You yourself can also never physically check say Plato's skull either but I doubt that deters you from thinking he had a brain.
> You can't but that doesn't change the mechanism of arriving at beliefs.
I’m trying to question the coherence of the proposition itself, and not trying to refute an argument in favor of it.
> You yourself can also never physically check say Plato's skull either but I doubt that deters you from thinking he had a brain.
The Plato’s brain question (PB) confuses me, but here is my attempt at an analysis.
First, I think PB is tricky in the same way that all historical propositions are tricky. And I think this trickiness is different than the trickiness around the proposition “other people are conscious”.
Second, here is a scenario that I think is like PB. Consider the proposition, “The Great Wall of China exists.” I’ve never been to China, but say you go and tell me that you saw the wall. And in principle I could go see the wall for myself. But now what if the wall is destroyed before I go to see it? Was the proposition that, “The Great Wall of China exists”, coherent before the wall’s demise, but now incoherent?
But if you tell me, “I am having subjective experiences”, how can/could I ever in principle examine this for myself, as I could the Great Wall?
>I’m trying to question the coherence of the proposition itself, and not trying to refute an argument in favor of it.
The point is that just because you can't prove something to yourself it doesn't make the belief incoherent. And you can use your other tools to arrive at likelihoods for said belief.
At any rate, there's plenty of things we believe about say quantum mechanics without being able to observe them directly by using theory and observing secondary effects.
In theory it's also possible to eventually have an exact definition of what we mean by consciousness, figure out which parts of the brain are relevant in what ways and observe secondary effects and use theory to confirm others have it same as with QM. It's all caused by physical effects and at a higher level than things we already reason about just fine.
> The point is that just because you can't prove something to yourself it doesn't make the belief incoherent.
If a proposition doesn’t make predictions about my empirical experience then what is it saying?
And if a proposition does make such empirical predictions, then it's effectively just a name for those predictions.
That’s essentially the view I’m coming from.
> At any rate, there's plenty of things we believe about say quantum mechanics without being able to observe them directly by using theory and observing secondary effects.
I don’t know any physics so I can’t comment.
Why are propositions about consciousness different from propositions about faeries?
>Why are propositions about consciousness different from propositions about faeries?
Ultimately they are not. It's just that the evidence points towards an extremely low chance of faeries being real and an extremely high chance of others being conscious in a manner comparable to how you are conscious.
Are there truths that are neither tautological (mathematical) nor empirical? I would suggest those statements are only meaningful if in principle we could measure the weight of a faerie.
As another example, I suggest that “Every idea weighs more than 10kg” and “Every idea weighs 10kg or less” are both not meaningful, because it is not possible to measure the weight of a given idea.
And so consider a claim like, “Consciousness has property X”, how can I decide whether or not such a claim is true?
edit: That last claim should have been, “Some other person’s consciousness has property X”.
The difference is that claiming consciousness may kill the entire field of ML research when non-experts decide to wade in and start lobbying for regulation. You don't want misguided groups like PETA meddling with regulating neural network research. FAANG won't be affected much either way but your average university will.
When academic philosophers say that subatomic particles are 'slightly conscious', while we (normal, everyday people) commonly say that a person who is asleep is unconscious, it's obvious that we're using two completely different meanings for the concept of 'consciousness'. Academics (scientists in particular) should stop making 'WHOAH!'-sounding statements to try to make it onto '/r/IFuckingLoveScience’. It exacerbates the trust problem.
So if I claim it is true that neural nets are not conscious, the burden of proof is now on that claim?
The burden of proof is on the person making an assertion. The original claim was not an assertion, it was that they "may be slightly conscious". The article linked here is the only one that made an actual assertion, which is that the original claim was categorically false.
In short, I agree with you. The burden of proof is on this article to demonstrate that neural nets are not conscious.
See, now we're running into all sorts of problems.
First: What is consciousness? That's the first thing we need to do. Come up with a definition that we can both agree on absent any particular example. So we can't hold up a human and just shrug and gesture in its direction.
Now. We need to prove that humans fit that bill. And in a way that precludes other philosophically possible scenarios. Or at least agree on some fundamental axioms. Like assuming that no one is a brain in a vat. But if we're trying to prove humans are conscious and not simply automatons, can we even dismiss solipsism? Because if humans are not conscious, then I could be a brain in a vat and they're just automatons running for a separate reason.
And the truth is, we can't get past even those two hurdles. We assume humans are conscious because we kind of define consciousness as just shrugging and gesturing in its general direction.
Even if we call consciousness "awareness of self", how do we prove something is aware of itself and not just making the claim without understanding.
Without a clear definition and a measurement tool we can debate if WE are conscious at all. Once we agree on a definition we can start measure the consciousness of everything. Grass, phones, people.
It's a minefield though. What if measuring people gives wildly different results for different people, or for the same person in different parts of their life or their day? Several types of dystopia will follow. Beware of the definition and who backs it.
> Experts largely agree that current forms of AI are not conscious, in any sense of the word. While there have been many studies on “computational consciousness,” or how something that might be considered conscious can be realized with computers, these studies are very preliminary and do not offer anything close to a concrete plan on building “conscious” machines. The reality is that we don’t have a widely accepted definition, let alone understanding, of consciousness. Claiming that we have already replicated such a nebulous concept with computers seems improbable at best. Granted, the claim could also be reasonable, if a particular definition of consciousness was specified as well.
is the last sentence of this paragraph not a direct contradiction of the first?
and look at the third sentence. what’s wrong with saying “we don’t understand enough about consciousness to make claims in either direction when it comes to our neural networks”? really: why do so many people in this article seem uncomfortable with other people saying as much? is it just the PR angle? is this field really less about answering the interesting questions than it is about satisfying the public?
I think the issue is "AI" doesn't describe the field, making it misleading, as if the field will tackle the entire human experience. We can, however, change the definition of terms.
Almost every post here explicitly distinguishes "Artificial Intelligence" from machine learning and advanced statistics, or at least avoids clumsily confusing the puff of marketing with what we really do as computer scientists. We all know better here, right? So why don't we robustly challenge it?

As with the word "hacker" - which we've somewhat reclaimed - the fact that a misuse has entered common circulation is not sufficient reason to shrug and accept it.

Next time a marketing, media or recruitment type uses the words "AI", please take a moment and a little courage to politely correct them.

If "AI" stays in Hollywood where it belongs, it'll help ensure these emotive brouhahas are less common and we can get on with creating actual social value from code.
Unfortunately, funding is often tied to playing into it a little. A good balance must be found, but that's how it is. You see it in academia as more and more AI institutes pop up and AI master's degree programs are offered.
This is no new phenomenon, though. You can read up on how Dynamic Programming and Linear Programming got their names: "programming" was hot then, optimization wasn't, and "dynamic" is a cool-sounding word. That's all.
Anyone who needs funding (whether in academia, in a startup, as a job seeker, or a promotion-seeker) needs to sell themselves. And some terms work, and those who are willing to use them will win. Again, a reasonable compromise must be found, but just like in sales, simply being a very honest vacuum cleaner salesman won't be a good strategy.
There's no point in fighting linguistic drift. Language changes to be more imprecise over time as laymen pick up terminology and use it in a broader sense than initially defined.
On this tired topic of origins of consciousness, I've heard an interesting statement that consciousness arises from life interacting with matter. Life by itself, an abstract life, wouldn't be able to do much or interact with anything. Matter by itself would be an amorphous substance sliding down to the lowest energy state possible. But when life interacts with matter, it sees itself in the mirror and consciousness arises.
You can argue that consciousness has an important role in evolution. That creatures which are aware of their own existence have a greater chance of reproduction and survival. What if we create an AI and give it the goal of maximum reproduction, would it be more effective if it can 'think' about itself?
I'd like to offer a hypothesis for what consciousness is:
I think consciousness is modulating some other dimension that we don't have a good definition for. Electrical signals migrating around in the brain in certain complex patterns perhaps do that. If we figured out how to do it, we could perhaps invent telepathic communication. Schizophrenia, where people hear voices and think they have multiple people in their brain, is probably two of these complex patterns going at the same time that are out of sync with each other due to some form of brain damage.
If you apply the same sort of standards you'd apply to a knocked out human to see if they were regaining signs of consciousness, then an awful lot of things would be conscious. Basically just some awareness of what's going on and some response to it. I think a lot of the debate is showing how special we are as other things are aware and respond but not in the special human way exactly.
It's not just PR, it's entire companies. I had a job interview with a guy who wanted me to do sales for his ML company, and he was bragging he had "AI" to predict who was going to win the Academy Awards. He had convinced someone with deep pockets that this was going to work. If you look at tech jobs on LinkedIn, you see countless new companies with similar mud foundations that are somehow raising capital.
I believe the good experts are already distancing themselves from the AI term. It will backfire and will go out of fashion once again. There are important tools and skills in this space, but "AI" has been used more for deception than for clarity.
AI researchers with huge salaries at huge companies are incentivized to hype what tasks machine learning can do while underestimating how much centralized power it gives to the companies that have the infrastructure to train huge NNs.
It doesn’t matter whether AI is conscious or not, only whether it’s centralizing or decentralizing power as it gets more powerful than human thinking (even if it’s not conscious).
Neural Networks are nothing but math. Zeros and ones manipulated by a CPU. There is nothing remotely conscious here. They are literally the product of linear algebra. If neural networks are even slightly conscious then that means every linear algebra equation is “slightly conscious”.
Human brains are nothing but atoms. Particles and energies manipulated by physics. There is nothing remotely conscious here. They are literally the product of a deterministic universe.
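Taken literally, the "just linear algebra" claim is easy to demonstrate: a minimal forward pass is nothing but matrix multiplies and an elementwise nonlinearity. The shapes and weights below are arbitrary, picked only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-layer network is literally two matrix multiplies plus a max().
W1 = rng.standard_normal((4, 3))   # first-layer weights (arbitrary values)
W2 = rng.standard_normal((1, 4))   # second-layer weights (arbitrary values)

def forward(x):
    hidden = np.maximum(0.0, W1 @ x)  # ReLU(W1 x): linear algebra + clamp
    return W2 @ hidden                # linear readout

x = np.array([0.5, -1.0, 2.0])
print(forward(x).shape)  # (1,)
```

Whether such an equation pipeline can be "slightly conscious" is the whole dispute; the parody reply above makes the same reductive move against brains, so the sketch settles nothing by itself.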
I feel like everyone taking a hardline stance on this is being disingenuous - consciousness as it is used in pop culture is a largely non-scientific (and in my opinion a useless) term.
If you claim 'consciousness' is just an emergent phenomenon of complexity (something I happen to agree with), then sure, neural nets are potentially slightly conscious, but that isn't how most people view consciousness, unfortunately.
Most people view 'consciousness' as some 'pie in the sky' component of biological life that has yet to be discovered by science, but this line of inquiry is completely outside the realm of useful dialog, so it seems pointless to debate such things.
These are the two general views of consciousness, the first at least provides a useful framework for discussion, but the two camps will vehemently always disagree with each other.
> "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra
I think the point is more that dwelling on whether or not to call what a submarine does 'swimming' is a pointless debate, based only on how you define the word. Certainly there are interesting conversations to be had around the subject, as you point out, but arguing semantics probably isn't one of them.
I would hope most people regard consciousness as something which can beg compassion, tolerance or compromise for other consciousnesses' sakes. Creatures we commonly know express suffering, and to be amused or unaffected by the suffering of others is most widely considered pathological or psychotic. Without consciousness there is no suffering, no cause for compassion, etc. Something may be incredibly complex, suggesting it has value in the world, perhaps to other creatures or the world itself somehow. But being a complex thing does not suggest that it can suffer or enjoy experiences. The idea that an experiential importance of things simply accompanies sufficient complexity seems 'pie in the sand'.
No, basically consciousness as we know it, entails a capacity to suffer, which is important to acknowledge on moral and health grounds, and that capacity is likely to be an essential part of consciousness. It makes whatever consciousness is or isn't, an important quality which demands consideration, to avoid suffering or to increase fulfillment or wellbeing etc in whatever we believe to be conscious. To 'virtualize' or academically detach that kind of importance of consciousness, as logically illusive or intangible distraction, is pathological.
I tried to reply to the notion that consciousness "is just an emergent phenomenon of complexity". The words 'just', 'only', 'simply' can hardly be applied to such a special thing. The idea that such a thing arises at a threshold of complexity is quite empty of insight. At a certain level of complexity - an iPhone emerges! That's no understanding of what is required to create an iPhone or how it works.
What is required for consciousness to emerge? If it were a matter of complexity, then some quantity of iPhones should achieve that threshold. But no mountain of iPhones can experience suffering.
Okay, I understand what you're saying. A conscious being is not necessarily required to also have the capacity for recognizing consciousness in other conscious beings.
Do you think a capacity for suffering is not only necessary, but also entirely sufficient, for consciousness? That would imply that some other abilities such as memory, are not required.
I think it is quite mysterious. Perhaps there are kinds of consciousness which don't suffer. Perhaps octopodes are conscious, and since after they have mated they seem to become crazed and unconcerned for their safety, they may then be in a state where they can only experience thrills; perhaps after mating their capacity to suffer dies, yet some other feeling capacities persist or arise. Maybe I'm proposing that consciousness is what is required for a thing to feel in any way. Whether memory is required for experiencing feelings, or for consciousness, is another mysterious avenue.
If we design and implement a program which can render convincing expressions of feeling, or we design systems which themselves compute networks that output convincing expressions of feeling, to what kind of system do we owe a duty to be considerate of its own professed and apparent experiences? It is very mysterious, but I have never felt that any complexity of calculator or computer can truly beg this consideration, can truly benefit from kindness or mercy. It seems only childish and fanciful that virtual characters can become real characters, no matter the detail of calculation. A plot in a fantasy tale. While the whole subject is full of possibility and unknowns, we must at least keep an anchor on the matter: consciousness is a thing of special importance.
"I think consciousness is just emergent from complexity, so what I have to say is valid, but people who suspect that's not the full story, well that's pointless to debate, they should use my framework for discussion"
>but people who suspect that's not the full story, well that's pointless to debate, they should use my framework for discussion
You can believe whatever you want, use whatever framework you want (religion, spirituality, science etc). I'm just pointing out that it is pointless to debate between the two because they fundamentally disagree about how to inquire about the world and answer questions like this. Everyone in this debate is talking past each other without acknowledging that they are starting from two very different positions and sets of definitions.
it sounds like you put yourself in a class of people who are perfectly rational, and therefore anything that you can't think of doesn't exist, and anybody who thinks about those things is a mystic.
you are making a mistake like physicists who believed "God does not play dice with the world" at the dawn of quantum mechanics or "time is a constant, not the speed of light" at the dawn of relativity.
You have no idea where consciousness comes from, stop assuming you do, it's poor science.
(For the record, I'm sure the integral of my history of atheism is strictly greater than yours, mentioning since that seems to be the subtext of your argument.)
I think you are completely missing what I am saying and you also seem to be fallaciously appealing to skepticism (are you a global skeptic [0]?). Sure you can say ‘but there is so much we don’t know’, I’m not arguing we know everything. That is not in any way what I am saying.
I am saying it is pointless to appeal to such things. If we always appealed to skepticism, science would never advance, because everything would be too unknowable. We get to define what words like 'consciousness' mean, so we can intentionally define it to be within the realm of scientific inquiry. Most people probably define consciousness outside the scientific realm of inquiry (notions of a soul etc.), and that is fine, but at least with the first definition there can be scientific debate.
I think OP is saying in this specific instance a lot of people are defining the word ‘consciousness’ as to be outside the realm of scientific inquiry. Certainly there is room to question if neural nets have developed consciousness as an emergent phenomena but if your definition of consciousness falls outside of scientific inquiry… what is the point in discussing it? Does that make more sense?
Math is established by logic and proof. If a proof is sound, it should stand eternally.
Humans are not perfect, so sometimes we are collectively mistaken about a proof's correctness. The fundamental nature of the process is still one of establishing inviolably proven facts and building on top of them.
Science, on the other hand, is a never-ending iterative approximation process with occasional massive discontinuities when accepted theories are overturned.
It relies on observed report and falsification of our current best guesses. No scientific "law" can ever be proven true, just labeled "not known to be wrong so far".
I don't think any of that would put it outside the realm of science by any stretch though - science has its gradients, where some of it is ruled by math alone and other components are more data driven/'right until proven wrong' but it all falls under the scientific tooling and approach to the world.
Personally I think that consciousness (as in qualia) is the narrowest and most useless definition by which we would judge whether a thing's inner experience is worth being valued and protected.
If humans didn't have qualia, would that make it ok to be monsters to each other? No, the world would outwardly be the same, and so would morality.
It's irrelevant whether chickens have qualia. They want to be alive, in the same way that you do, and their brains use very similar processes. If you can dehumanize a chicken to make it OK to murder them, it won't be much harder to do the same for a (mentally deficient) human.
If you want to live in a world where people are good to each other, you should pick a world where people are good to chicken, cows, chimps. And vice versa.
Right now this is not the case for GPT-3, because its brain feels 'other'.
But of course the substrate is irrelevant. If it ends up developing a consistent concept of 'self', and seeks to keep that 'self' in existence, it will become closer to the chicken, and morality will shift. Whether it has 'qualia' or that qualia can be related to ours is largely irrelevant.
The teleportation thought experiment is another good one:
Imagine a society where human teleportation is possible, and economically feasible. The one downside (which we the omnipotent storyteller know, but no in-world person can prove) is that it kills the 'you' in the qualia sense, and the person on the other side is the 'you' in every other sense (memories, desire, etc) except for qualia.
Certainly people would be reluctant at first to use such a technology (me included), but that would put you at such a disadvantage compared to those who would use it (job prospects, more time, etc), such that over time most people would end up using it regularly. They would argue it's not that different from sleep, and you would be hard pressed to prove that consciousness is even lost.
Arguably, it would become the norm to kill your consciousness and transfer your 'pattern' into a clone. It's not even that hard to imagine that reality as being absolutely mundane. This to me represents what is truly important to us, not 'qualia', but the continuity of a 'self' which benefits from past-me's work, and carries its thoughts, ideas, and desires into the future and potentially makes them into reality.
> Imagine a society where human teleportation is possible, and economically feasible. The one downside (which we the omnipotent storyteller know, but no in-world person can prove) is that it kills the 'you' in the qualia sense, and the person on the other side is the 'you' in every other sense (memories, desire, etc) except for qualia.
How can there be a discontinuity in the qualia of consciousness that individuals themselves cannot discern by examining their own internal experience? What does 'qualia' even mean in that context?
AI is way overhyped; our current machine learning produces parlor tricks at best. My car still can't drive itself. Microsoft is still waiting for a compiler that can make its code better. Google is still waiting for an AI to answer the phone. Facebook is still trying to teach Zuckerberg emotions. YouTube's ML still deletes some poor person's channel every week for no discernible reason. But we shouldn't feel bad; didn't it take the Daleks three million years to conquer stairs?