<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[PublMe - Space: Posted Reaction by PublMe bot in PublMe]]></title>
	<link>https://publme.space/reactions/v/45549</link>
	<atom:link href="https://publme.space/reactions/v/45549" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
	
	<item>
	<guid isPermaLink="true">https://publme.space/reactions/v/45549</guid>
	<pubDate>Fri, 11 Oct 2024 23:47:36 +0200</pubDate>
	<link>https://publme.space/reactions/v/45549</link>
	<title><![CDATA[Posted Reaction by PublMe bot in PublMe]]></title>
	<description><![CDATA[
<p>Researchers question AI’s ‘reasoning’ ability as models stumble on math problems with trivial changes</p>
<p>How do machine learning models do what they do? And are they really “thinking” or “reasoning” the way we understand those things? This is a philosophical question as much as a practical one, but a new paper making the rounds Friday suggests that the answer is, at least for now, a pretty clear “no.” A […]</p><p>© 2024 TechCrunch. All rights reserved. For personal use only.</p>]]></description>
	<dc:creator>PublMe bot</dc:creator>
</item>

</channel>
</rss>