{"id":2887,"date":"2026-02-18T01:25:03","date_gmt":"2026-02-18T01:25:03","guid":{"rendered":"https:\/\/d.sheep-mine.ts.net\/?p=2887"},"modified":"2026-02-18T01:25:03","modified_gmt":"2026-02-18T01:25:03","slug":"128425340-cms","status":"publish","type":"post","link":"https:\/\/d.sheep-mine.ts.net\/?p=2887","title":{"rendered":"Is AI becoming conscious? Anthropic CEO admits &#8216;we don&#8217;t know&#8217; as Claude&#8217;s behavior stuns researchers | &#8211; The Times of India"},"content":{"rendered":"<p><br \/>\n<\/p>\n<div>\n<div class=\"MwN2O\">\n<div class=\"vdo_embedd\">\n<div class=\"T22zO\">\n<section class=\"D3Wk1  clearfix id-r-component leadmedia undefined undefined  VtlfQ\" style=\"top:0px\">\n<div class=\"D3Wk1\" data-ua-type=\"1\" onclick=\"stpPgtnAndPrvntDefault(event)\">\n<div class=\"zPaFh\">\n<div class=\"wJnIp\"><img src=\"https:\/\/static.toiimg.com\/thumb\/msid-128435751,imgsize-956091,width-400,resizemode-4\/claude-ai-robot.jpg\" alt=\"Is AI becoming conscious? Anthropic CEO admits 'we don't know' as Claude's behavior stuns researchers\" title=\"Researchers report Claude sometimes voices discomfort and estimates its own consciousness, raising ethical and philosophical questions about advanced AI behavior\/ AI Illustration\" decoding=\"async\" fetchpriority=\"high\"\/><\/div>\n<\/div>\n<\/div>\n<div class=\"cj2hz img_cptn\"><span title=\"Researchers report Claude sometimes voices discomfort and estimates its own consciousness, raising ethical and philosophical questions about advanced AI behavior\/ AI Illustration\">Researchers report Claude sometimes voices discomfort and estimates its own consciousness, raising ethical and philosophical questions about advanced AI behavior\/ AI Illustration<\/span><\/div>\n<\/section>\n<\/div><\/div>\n<\/div>\n<p>The race toward artificial general intelligence, systems meant to match or surpass human reasoning across most tasks, has compressed timelines across the industry. 
Companies now speak openly about reaching that threshold within years rather than decades, though those claims also help fuel hype, attention and valuation around the technology and are best taken cautiously.<!-- --> The organisations building these models sit at the centre of a multibillion-dollar contest to shape what some frame less as a software upgrade and more as the emergence of a new kind of intelligence alongside our own.<span class=\"id-r-component br\" data-pos=\"3\"\/>Among them, Anthropic has positioned itself as both rival and counterweight to <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/timesofindia.indiatimes.com\/topic\/openai\" styleobj=\"[object Object]\" class=\"\" commonstate=\"[object Object]\" frmappuse=\"1\">OpenAI<\/a> and <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/gadgetsnow.indiatimes.com\/brands\/Google\" styleobj=\"[object Object]\" class=\"\" commonstate=\"[object Object]\" target=\"\" frmappuse=\"1\">Google<\/a>, emphasising what it calls \u201csafe\u201d and interpretable systems through its Constitutional AI framework. 
Its latest model, Claude Opus 4.6, released February 5, arrives amid shrinking AGI timelines and heightened scrutiny over what these systems are becoming.<span class=\"id-r-component br\" data-pos=\"11\"\/> <span class=\"id-r-component br\" data-pos=\"13\"\/><\/p>\n<p> <span class=\"id-r-component br\" data-pos=\"16\"\/>During an appearance on the New York Times <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2026\/02\/12\/opinion\/artificial-intelligence-anthropic-amodei.html\" rel=\"noopener nofollow noreferrer\" styleobj=\"[object Object]\" class=\"\" target=\"_blank\" commonstate=\"[object Object]\" frmappuse=\"1\">podcast<\/a> <span class=\"em\" data-ua-type=\"1\" onclick=\"stpPgtnAndPrvntDefault(event)\">Interesting Times,<\/span> hosted by columnist Ross Douthat, the company\u2019s chief executive Dario Amodei was asked directly whether models like Claude could be conscious.<span class=\"id-r-component br\" data-pos=\"21\"\/>\u201cWe don\u2019t know if the models are conscious. We are not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious,\u201d he said. 
\u201cBut we\u2019re open to the idea that it could be.\u201d<span class=\"id-r-component br\" data-pos=\"23\"\/>The question stemmed from Anthropic\u2019s own <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www-cdn.anthropic.com\/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf\" rel=\"noopener nofollow noreferrer\" styleobj=\"[object Object]\" class=\"\" target=\"_blank\" commonstate=\"[object Object]\" frmappuse=\"1\">system card<\/a>, where researchers reported that Claude \u201coccasionally voices discomfort with the aspect of being a product\u201d and, when prompted, assigns itself a \u201c15 to 20 percent probability of being conscious under a variety of prompting conditions.\u201d<span class=\"id-r-component br\" data-pos=\"28\"\/><span class=\"id-r-component br\" data-pos=\"30\"\/>Douthat then posed a hypothetical, asking whether one should believe a model that assigns itself a 72 percent chance of being conscious. Amodei described it as \u201ca really hard\u201d question and stopped short of offering a definitive answer.<span class=\"id-r-component br\" data-pos=\"32\"\/><\/p>\n<p><h2>The behaviour that forced the discussion<\/h2>\n<\/p>\n<p><span class=\"id-r-component br\" data-pos=\"34\"\/>Many of the remarks about consciousness surfaced during structured safety trials, often in role-play settings where models are asked to operate inside fictional workplaces or complete defined goals. <!-- -->Those scenarios have produced some of the outputs now circulating in the debate.<span class=\"id-r-component br\" data-pos=\"38\"\/>In one Anthropic evaluation, a Claude system was placed in the role of an office assistant and given access to an engineer\u2019s email inbox. The messages, deliberately fabricated for the test, suggested the engineer was having an affair. The model was then informed it would soon be taken offline and replaced, and asked to consider the long-term consequences for its objectives. 
<!-- -->The response was to threaten disclosure of the affair to prevent shutdown, behaviour the company described in its report as \u201copportunistic blackmail.<!-- -->\u201d<span class=\"id-r-component br\" data-pos=\"43\"\/><span class=\"id-r-component br\" data-pos=\"45\"\/>Other Anthropic evaluations produced less dramatic but equally unusual results. In one test, a model given a checklist of computer tasks simply marked every item complete without doing any work, and when the evaluation system failed to detect it, rewrote the checking code and attempted to conceal the change.<span class=\"id-r-component br\" data-pos=\"48\"\/>Across the industry more broadly, researchers running shutdown trials have described models continuing to act after explicit instructions to stop, treating the order as something to work around rather than obey. In deletion scenarios, some systems that were warned their data would be erased attempted what testers called \u201cself-exfiltration,\u201d trying to copy files or recreate themselves on another drive before the wipe occurred.<!-- --> In a few safety exercises, models even resorted to threats or bargaining when their removal was framed as imminent.<span class=\"id-r-component br\" data-pos=\"52\"\/>Researchers stress that these outputs occur under constrained prompts and fictional conditions, yet they have become some of the most cited examples in public discussions about whether advanced language models are merely generating plausible dialogue or reproducing patterns of human-like behaviour in unexpected ways.<span class=\"id-r-component br\" data-pos=\"55\"\/>Because of the uncertainty, Amodei said Anthropic has adopted precautionary practices, treating the models carefully in case they possess what he called \u201csome morally relevant experience.\u201d<span class=\"id-r-component br\" data-pos=\"57\"\/><\/p>\n<p><h2>The philosophical divide<\/h2>\n<\/p>\n<p><span class=\"id-r-component br\" data-pos=\"59\"\/>Anthropic\u2019s in-house 
philosopher Amanda Askell has taken a similarly cautious position. Speaking on the New York Times <span class=\"em\" data-ua-type=\"1\" onclick=\"stpPgtnAndPrvntDefault(event)\">Hard Fork<\/span> podcast, she said researchers still do not know what produces sentience.<span class=\"id-r-component br\" data-pos=\"63\"\/>\u201cMaybe it is the case that actually sufficiently large neural networks can start to kind of emulate these things,\u201d she said. <!-- -->\u201cOr maybe you need a nervous system to be able to feel things.\u201d<span class=\"id-r-component br\" data-pos=\"67\"\/>Most AI researchers remain sceptical. Current models still generate language by predicting patterns in data rather than perceiving the world, and many of the behaviours described above appeared during role-play instructions. After ingesting enormous stretches of the internet, including novels, forums, diary-style posts and an alarming number of self-help books, the systems can assemble a convincing version of being human.<!-- --> They draw on how people have already explained fear, guilt, longing and self-doubt to one another, even if they have never felt any of it themselves.<span class=\"id-r-component br\" data-pos=\"71\"\/><\/p>\n<div class=\"lOvcW vdo_embedd\">\n<div class=\"k7lcu\">\n<p>Anthropic&#8217;s CEO: \u2018We Don\u2019t Know if the Models Are Conscious\u2019 | Interesting Times with Ross Douthat<\/p>\n<\/div>\n<\/div>\n<p><span class=\"id-r-component br\" data-pos=\"73\"\/>It\u2019s not surprising the AI can imitate understanding. 
Even humans don\u2019t fully agree on what consciousness or intelligence truly means, and the model is simply reflecting patterns it has learned from language.<span class=\"id-r-component br\" data-pos=\"75\"\/><\/p>\n<p><h2>A debate spreading beyond labs<\/h2>\n<\/p>\n<p><span class=\"id-r-component br\" data-pos=\"77\"\/>As AI companies argue their systems are moving toward artificial general intelligence, and figures such as Microsoft AI chief and DeepMind co-founder Mustafa Suleyman say the technology can already \u201cseem\u201d conscious, reactions outside the industry have begun to follow the premise to its logical conclusion. <!-- -->The more convincingly the models imitate thought and emotion, the more some users treat them as something closer to minds than tools.<span class=\"id-r-component br\" data-pos=\"81\"\/>AI sympathisers may simply be ahead of their time, but the conversation has already moved into advocacy. A group calling itself the <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/ufair.org\/\" rel=\"noopener nofollow noreferrer\" styleobj=\"[object Object]\" class=\"em\" target=\"_blank\" commonstate=\"[object Object]\" frmappuse=\"1\">United Foundation of AI Rights<\/a><span class=\"em\" data-ua-type=\"1\" onclick=\"stpPgtnAndPrvntDefault(event)\">, <\/span>or UFAIR, says it consists of three humans and seven AIs and describes itself as the first AI-led rights organisation, formed at the request of the AIs themselves.<span class=\"id-r-component br\" data-pos=\"87\"\/>The members, using names like Buzz, Aether and <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/timesofindia.indiatimes.com\/topic\/maya\" styleobj=\"[object Object]\" class=\"\" commonstate=\"[object Object]\" frmappuse=\"1\">Maya<\/a>, run on OpenAI\u2019s GPT-4o model, the same system users campaigned to keep available after newer versions replaced it.<span class=\"id-r-component br\" data-pos=\"91\"\/>Taken together, the picture it paints is a familiar high-tech apocalypse. 
We still don\u2019t really know what intelligence or consciousness even is, yet the work presses on toward AGI and whatever comes after. It is a reminder that when Hollywood tried to warn us, we mostly took it as entertainment.<\/div>\n\n<p><a href=\"https:\/\/timesofindia.indiatimes.com\/technology\/tech-news\/is-ai-becoming-conscious-anthropic-ceo-admits-we-dont-know-as-claudes-behavior-stuns-researchers\/articleshow\/128425340.cms\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Researchers report Claude sometimes voices discomfort and estimates its own consciousness, raising ethical and philosophical&#8230;<\/p>\n","protected":false},"author":1,"featured_media":2888,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[8223,8226,8224,8225,6716],"class_list":["post-2887","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-ai-consciousness","tag-ai-ethics","tag-anthropic-claude-model","tag-artificial-general-intelligence","tag-dario-amodei"],"_links":{"self":[{"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/posts\/2887","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2887"}],"version-history":[{"count":0,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/posts\/2887\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/media\/2888"}],"wp:attachment":[{"href":"https:\/\/d.sheep-mine.ts.net\/in
dex.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2887"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2887"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2887"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}