Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models

Generalist robots that can perform a range of different tasks in open-world settings must be able to not only reason about the steps needed to accomplish their goals, but also process complex instructions, prompts, and even feedback during task execution. Intricate instructions (e.g., "Could you make me a vegetarian sandwich?" or "I don't like that one") require not just the ability to physically perform the individual steps, but the ability to situate complex commands and feedback in the physical world. In this work, we describe a system that uses vision-language models in a hierarchical structure, first reasoning over complex prompts and user feedback to deduce the most appropriate next step to fulfill the task, and then performing that step with low-level actions. In contrast to direct instruction following methods that can fulfill simple commands ("pick up the cup"), our system can reason through complex prompts and incorporate situated feedback during task execution ("that's not trash"). We evaluate our system across three robotic platforms, including single-arm, dual-arm, and dual-arm mobile robots, demonstrating its ability to handle tasks such as cleaning messy tables, making sandwiches, and grocery shopping.
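
To make the hierarchy concrete, the sketch below shows one way such a two-level loop could be organized: a high-level vision-language model turns the open-ended prompt and any situated user feedback into a short language command for the next step, and a low-level policy executes that command as robot actions. All names here (HighLevelVLM, LowLevelPolicy, Observation, control_loop) are hypothetical placeholders, not the system's actual interfaces; this is a minimal sketch of the structure described above, not the paper's implementation.

    # Hypothetical sketch of a hierarchical instruction-following loop.
    # None of these classes or functions are the paper's real API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Observation:
        images: list          # camera frames
        robot_state: list     # joint positions, gripper state, ...

    class HighLevelVLM:
        """Reasons over images, the open-ended prompt, and user feedback,
        and emits the next step as a short language command."""
        def next_step(self, obs: Observation, prompt: str, feedback: Optional[str]) -> str:
            # Placeholder: a real system would query a vision-language model here.
            return "pick up the bread"

    class LowLevelPolicy:
        """Turns a short language command plus the current observation into
        low-level robot actions."""
        def act(self, obs: Observation, command: str) -> list:
            # Placeholder: a real system would run a vision-language-action policy here.
            return [0.0] * 7

    def control_loop(prompt: str, num_steps: int = 3, actions_per_step: int = 10) -> None:
        high_level, low_level = HighLevelVLM(), LowLevelPolicy()
        feedback: Optional[str] = None
        obs = Observation(images=[], robot_state=[])
        for _ in range(num_steps):
            # High level: deduce the most appropriate next step in language.
            command = high_level.next_step(obs, prompt, feedback)
            # Low level: execute that step with a burst of robot actions.
            for _ in range(actions_per_step):
                action = low_level.act(obs, command)
                # On real hardware, the action would be sent to the robot here.
            feedback = None  # would be updated from live user input, e.g. "that's not trash"

    control_loop("Could you make me a vegetarian sandwich?")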