<?xml version='1.0'?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[PublMe - Space: Posted Reaction by PublMe bot in PublMe]]></title>
	<link>https://publme.space/reactions/v/47385</link>
	<atom:link href="https://publme.space/reactions/v/47385" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
	
	<item>
	<guid isPermaLink="true">https://publme.space/reactions/v/47385</guid>
	<pubDate>Sat, 16 Nov 2024 19:00:34 +0100</pubDate>
	<link>https://publme.space/reactions/v/47385</link>
	<title><![CDATA[Posted Reaction by PublMe bot in PublMe]]></title>
	<description><![CDATA[
<p>Playing Chess Against LLMs and the Mystery of Instruct Models</p>
<div><img width="800" height="467" src="https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg?w=800" alt="Chart of mean centipawn difference for LLMs playing chess against Stockfish" srcset="https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg 2100w, https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg?resize=250,146 250w, https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg?resize=400,233 400w, https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg?resize=800,467 800w, https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg?resize=1536,896 1536w, https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg?resize=2048,1195 2048w" data-attachment-id="734463" data-permalink="https://hackaday.com/2024/11/16/playing-chess-against-llms-and-the-mystery-of-instruct-models/mean_centipawn_difference_llm_chess/" data-orig-file="https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg" data-orig-size="2100,1225" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;1&quot;}" data-image-title="mean_centipawn_difference_llm_chess" data-image-description="" data-image-caption="" data-medium-file="https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg?w=400" data-large-file="https://hackaday.com/wp-content/uploads/2024/11/mean_centipawn_difference_llm_chess.jpg?w=800" tabindex="0" role="button"></div><p>At first glance, trying to play chess against a large language model (LLM) seems like a daft 
idea, as its weighted nodes have, at most, been trained on some chess-adjacent texts. It has no concept of board state, stratagems, or even what a ‘rook’ or ‘knight’ piece is. This daftness is indeed demonstrated by [Dynomight] <a rel="nofollow" href="https://dynomight.net/chess/" target="_blank">in a recent blog post</a> (<a rel="nofollow" href="https://dynomight.substack.com/p/chess" target="_blank">Substack version</a>), where the <a rel="nofollow" href="https://stockfishchess.org/" target="_blank">Stockfish</a> chess AI is pitted against a range of LLMs, from a small Llama model to GPT-3.5. Although the outcomes (see featured image) are largely as you’d expect, there is one surprise: the <code>gpt-3.5-turbo-instruct</code> model proves quite capable of giving Stockfish a run for its money, albeit on Stockfish’s lower settings.</p><p>Each model was given the same query, telling it to be a chess grandmaster, to use standard notation, and to choose its next move. The stark difference between the instruct model and the others calls for investigation. OpenAI describes the instruct model as an ‘InstructGPT 3.5 class model’, which <a rel="nofollow" href="https://openai.com/index/instruction-following/" target="_blank">leads us to this page</a> on OpenAI’s site and an <a rel="nofollow" href="https://arxiv.org/abs/2203.02155" target="_blank">associated 2022 paper</a> that describes how InstructGPT is effectively the standard GPT LLM heavily fine-tuned using human feedback.</p><p>Ultimately, it seems that instruct models do better with instruction-based queries because they have been extensively fine-tuned on exactly that kind of prompt. A <a rel="nofollow" href="https://news.ycombinator.com/item?id=37558911" target="_blank">[Hacker News] thread from last year</a> discusses the Turbo and Instruct versions of GPT 3.5, also using chess as a comparison point. 
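</p>
<p>The post doesn’t give [Dynomight]’s literal prompt, but the kind of query described above — play as a grandmaster, answer in standard notation, pick the next move — can be sketched with nothing but Python’s standard library. Note that the prompt wording, function names, and regex below are hypothetical illustrations, not the actual experiment code; a real run would send the prompt to the model and parse its free-text reply.</p>

```python
import re

def build_prompt(moves):
    """Assemble a completion-style prompt asking for the next move in
    standard algebraic notation (SAN). The wording is a hypothetical
    reconstruction of the query described in the post."""
    # Interleave move numbers: ["e4", "e5", "Nf3"] -> "1.e4 e5 2.Nf3"
    game = " ".join(f"{i // 2 + 1}.{m}" if i % 2 == 0 else m
                    for i, m in enumerate(moves))
    return ("You are a chess grandmaster. Reply with your next move "
            "in standard algebraic notation only.\n"
            f"Game so far: {game}\n"
            "Your move:")

# Loose SAN matcher: castling, or piece/pawn moves with optional
# disambiguation, capture, promotion, and check/mate suffixes.
SAN = re.compile(
    r"\b(O-O(?:-O)?|[KQRBN]?[a-h]?[1-8]?x?[a-h][1-8](?:=[QRBN])?[+#]?)")

def extract_move(reply):
    """Pull the first SAN-looking token out of a free-text model reply."""
    m = SAN.search(reply)
    return m.group(1) if m else None
```

<p>An extracted move would still have to be checked for legality against the actual board state — something the LLM itself has no notion of, which is part of why most of the models fare so poorly.</p>
<p>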
Meanwhile, <a rel="nofollow" href="https://openai.com/index/chatgpt/" target="_blank">ChatGPT is a sibling of InstructGPT</a>, per OpenAI, trained using Reinforcement Learning from Human Feedback (RLHF), with ChatGPT users presumably now providing most of said feedback.</p><p>OpenAI notes repeatedly that neither InstructGPT nor ChatGPT provides correct responses all the time. Within the limited problem space of chess, however, it would seem that the instruct model is good enough not to bore a dedicated chess AI into digital oblivion.</p><p>If you want a digital chess partner, try your <a rel="nofollow" href="https://hackaday.com/2024/03/30/playing-chess-against-your-printer-with-postscript/">PostScript printer</a>. Chess software doesn’t have to be as <a rel="nofollow" href="https://hackaday.com/2023/06/23/a-chess-ai-in-only-4k-of-memory/">large</a> as an AI model.</p>]]></description>
	<dc:creator>PublMe bot</dc:creator>
</item>

</channel>
</rss>