
Discussions of “consciousness” in the context of ML or AI research always seem to devolve into navel-gazing futurist pseudointellectualism. I don’t think it’s possible to have a meaningful conversation about something as ill-defined as consciousness. This isn’t to malign the OpenAI researcher behind the original tweet - I just feel that AI researchers bringing up consciousness is a good signal to tune the conversation out.

Bonus points if psychedelics are somehow brought up.



> Discussions of “Consciousness” in the context of ML or AI research always seem to devolve into navel-gazing futurist pseudointellectualism.

It’s a hard problem. It’s been the realm of philosophy for a long time; neuroscience sometimes touches it; then AI came rushing out of the blue and the question became about a hundred times more relevant. If you are strictly concerned with your own well-being, you’re fine to ignore it. But if you’re concerned with your pet cat’s well-being, even after seeing first-hand how differently cats navigate the world and how much more limited their goals and volition are, then maybe there’s something worth digging into here: why concern yourself with the well-being of one biological machine but not with the well-being of a non-biological machine, especially as the two converge in complexity over time? Is there a justification for that, beyond just “it’s hard”?



