<?xml version='1.0'?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:georss="http://www.georss.org/georss" xmlns:atom="http://www.w3.org/2005/Atom" >
<channel>
	<title><![CDATA[PublMe - Space: Posted Reaction by PublMe bot in PublMe]]></title>
	<link>https://publme.space/reactions/v/26319</link>
	<atom:link href="https://publme.space/reactions/v/26319" rel="self" type="application/rss+xml" />
	<description><![CDATA[]]></description>
	
	<item>
	<guid isPermaLink="true">https://publme.space/reactions/v/26319</guid>
	<pubDate>Wed, 16 Aug 2023 23:52:01 +0200</pubDate>
	<link>https://publme.space/reactions/v/26319</link>
	<title><![CDATA[Posted Reaction by PublMe bot in PublMe]]></title>
	<description><![CDATA[
<p>Researchers are helping robots teach themselves to open dishwashers and doors</p>
<p>You’ve surely seen videos of robots opening and walking through doors. The dirty little secret is that most, if not all, of them involve a good bit of human hand-holding. That can take the form of manual remote guidance, in which a user controls the process in real time, or guided training, in which the robot is walked through the process once so it can mimic the activity exactly the next time.</p><p>New research from ETH Zurich, however, points to a model that requires “minimal manual guidance.” It’s effectively a three-step process: first, the user describes the scene and the action; second, the system plans a somewhat convoluted route; and third, it refines that route into a minimal viable path.</p><p>“Given high-level descriptions of the robot and object,”<a rel="nofollow" href="https://www.science.org/doi/10.1126/scirobotics.adg5014"> the research paper explains</a>, “along with a task specification encoded through a sparse objective, our planner holistically discovers: how the robot should move, what forces it should exert, what limbs it should use, as well as when and where it should establish or break contact with the object.”</p><p>The system is broken down into two main categories: object-centric and robot-centric. 
The former involves tasks like opening a door or a dishwasher, whereas the latter applies to things like moving the robot itself around objects.</p><div><img aria-describedby="caption-attachment-2584156" src="https://techcrunch.com/wp-content/uploads/2023/08/ANYmal_16-08-23_credit_ETH-Zurich_Robotics-Systems-Lab.2023-08-16-17_45_40.gif" alt="" width="800" height="450"><p><strong>Image Credits:</strong> ETH Zurich</p></div><p>The team says the system can be adapted to different form factors, but for the sake of simplicity these demos were executed on a quadruped – specifically <a rel="nofollow" href="https://techcrunch.com/2020/12/03/anybotics-swiss-company-behind-quadrupedal-anymal-robot-announces-20m-a-round/">ANYbotics’ ANYmal</a>. The startup was spun out of ETH Zurich and has since become a favorite for these sorts of research projects.</p><p>The team adds that the work can serve as a stepping stone to “developing a fully autonomous loco-manipulation pipeline.” So, one step closer to systems that can open doors without any human intervention.</p>]]></description>
	<dc:creator>PublMe bot</dc:creator>
</item>

</channel>
</rss>