AI-Powered IDE: a revamp of an AI-driven developer support platform.
- Tamar Schori
- 5 days ago
- 10 min read
Updated: 3 days ago
How I evolved an AI-powered IDE by clarifying workflows and restructuring the UX to reduce developer confusion and build trust in automation

My role: As a Toptal talent, I was brought in as lead product designer to evolve 'OpenHands' from a niche, research-oriented IDE into a more accessible, structured platform suitable for a wider range of developers. Collaborating closely with the founder, a senior developer, and an AI engineer, I redefined the system architecture and interface logic to reduce friction, clarify workflows, and align AI assistance with the expectations and habits of everyday development teams.
Methods:
My mandate included:
Conducting a UX audit and product walkthroughs
Redefining mental models for human–AI collaboration in developer tooling
Designing and validating new information architecture and user flows
Clarifying 'OpenHands' core value proposition
Prototyping a unified interface that embeds AI agents across code, test, and deploy flows
Tools: Figma, Miro, UX Canvases, Competitive Analysis Grids
Timestamp: 2025 | 2 months
Problem
In a fast-evolving market where AI-powered developer tools are becoming more open, integrated, and privacy-conscious, 'OpenHands' aimed to deliver a forward-thinking, open-source IDE. However, the initial product blurred key system zones—merging AI-supported suggestions, developer input, and output feedback into a single undifferentiated space. This lack of structure increased cognitive load, made task orchestration unclear, and led to a disjointed user experience that conflicted with developers’ expectations of clarity, control, and modularity.
About the product
'OpenHands' is an AI-augmented developer environment that integrates intelligent task automation into the software development lifecycle. Rather than functioning as a traditional code editor or an agent-run IDE, it offers a conversational, Git-connected workspace where developers can request support across tasks such as code authoring, testing, deployment, and documentation, without needing to switch tools or contexts.
The interface is designed around a collaborative UX model: developers initiate tasks in natural language, and the system returns actionable outputs, ranging from pull requests to environment configurations, within a unified, chat-led interface. These outputs are auditable, reversible, and version-controlled, ensuring developers retain full ownership and oversight.
In essence, 'OpenHands' reframes software tasks as accessible, persistent entities, available on demand, traceable through Git, and embedded within familiar developer workflows. It reduces friction not by replacing developers with agents, but by empowering them to orchestrate AI-assisted tasks with confidence and control.
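The task-as-entity model described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not OpenHands' actual API: the `Task` and `TaskOutput` names, the branch-naming scheme, and the `revert` semantics are all assumptions chosen to show how outputs stay auditable, reversible, and traceable through Git.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskOutput:
    """An auditable artifact, e.g. a pull request or an environment config."""
    description: str
    branch: str          # every output lives on its own Git branch
    reverted: bool = False

    def revert(self) -> None:
        # Reversal maps to discarding the branch, never rewriting history
        self.reverted = True

@dataclass
class Task:
    """A development task requested in natural language."""
    prompt: str
    outputs: List[TaskOutput] = field(default_factory=list)

    def record(self, description: str) -> TaskOutput:
        # Each output is version-controlled and traceable by branch name
        out = TaskOutput(description, branch=f"openhands/task-{len(self.outputs) + 1}")
        self.outputs.append(out)
        return out

task = Task("add a health-check endpoint and a deploy step")
pr = task.record("PR: add /healthz endpoint")
pr.revert()  # the developer retains oversight: any output can be rolled back
```

The point of the sketch is the ownership model: outputs are persistent objects the developer inspects and can always roll back, rather than changes applied silently.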
Process

I model my process on the Double Diamond, balancing discovery with delivery. I guide stakeholders through ambiguity (problem finding) before entering solution mode. This ensures we’re solving the right problem—at the right fidelity—for the right users.
Diamond one: finding the right problem to solve

Competitor Analysis - Discovered Market Trends:
Increased Adoption: The adoption of AI-powered developer tools is expected to grow significantly in the coming years as developers recognize their benefits.
Focus on Open Source: There is a growing trend towards open-source AI tools for developers, which can promote innovation and collaboration.
Integration with Existing Tools: AI tools are increasingly being integrated with existing development tools and platforms, making them easier to use and adopt.
Emphasis on Security and Privacy: As AI tools become more powerful, there is a growing emphasis on ensuring their security and privacy.
Overall, the AI + developer productivity market is a dynamic and rapidly evolving space with the potential to significantly impact the way software is developed. By automating repetitive tasks, improving code quality, and enhancing the developer experience, AI tools can help developers be more productive, efficient, and creative.

User Research Approach:
In a lean startup context, I replaced formal user interviews with academic literature reviews that offered data-driven insight into developer behavior and expectations. These were validated through online comments and quick guerrilla interviews within the startup’s ecosystem.
Academic overview
>> Understanding User Mental Models in AI-Driven Code Completion Tools: Insights from an Elicitation Study Read Online
This study investigates how developers form and adapt mental models when using AI-driven code assistants (like Copilot). It highlights several key findings:
Mismatch in Expectations: Developers often begin with flawed or incomplete assumptions about how AI code tools work, leading to confusion or misalignment between expected and actual behavior.
Trust and Predictability: When the assistant’s behavior doesn’t match the developer’s mental model, it erodes trust. Predictable, explainable behavior increases perceived usefulness.
Gradual Model Refinement: Over time, users adjust their mental models through repeated use, trial and error, and environmental cues (e.g., documentation, UI feedback).
Need for Transparency: Developers want clear signals about what the AI knows, why it makes suggestions, and how to control or override them.
Design Implication: Interfaces for AI coding tools should scaffold user understanding by surfacing system capabilities and limitations, offering contextual explanations, and aligning closely with developers’ existing workflows and logic.
On Transparency and User Trust:
“It has been proven that transparency and interpretability are essential for establishing user trust, and understanding the user’s mental model is essential to making users comfortable when adopting the tools.” (Section 2, Rationale and Background)
On the Design Implication for AI Interfaces:
“A tool designed considering users’ mental models might allow developers to know when the AI might make contextually relevant suggestions and when it can blunder... to keep the coding flow in hand and avoid wasting time for distraction.” (Section 2, Rationale and Background)
>> A New Generation of Intelligent Development Environments Read Online
A recent study on AI-powered code completion highlights the critical risk of mismatched mental models. Developers often struggle when the system’s suggestions don’t align with how they believe the tool operates.
These findings reinforce why it was essential for 'OpenHands' to clarify AI-assisted workflows and tool behaviors—ensuring developers can anticipate, understand, and trust the system at every step.
Contextualized Online comments:
>> User Feedback on Fragmentation & Confusion - Read online
“OpenHands currently maintains two separate configuration systems that create confusion, duplication, and bugs”
Feedback from the docs reveals how inconsistent, overlapping settings reduce predictability and increase cognitive load.
>> User Feedback on the need for transparency - Read online
“Users see action before understanding the reasoning … Decision making: Harder to make informed decisions about whether to proceed”
This GitHub issue highlights how the interface triggers actions before clarity is established, leaving users unsure whether to trust or follow up on AI-generated changes.
This approach surfaced key friction points:
Developers were unclear whether the AI was acting as a helper or an executor.
Feedback loops between user input and system output were ambiguous.
Core workflows (coding, testing, and deployment) felt disjointed due to an overly nested interface that blurred system zones and obscured process continuity.
Problem Framing:

'OpenHands' currently attracts highly technical users: power users, cautious developers, and advanced engineers, alongside a secondary group of capable developers navigating multi-step workflows. While these users are fluent in complex tooling, they still encounter friction at the interaction level that can hinder sustained engagement.
UX Audit:
The initial product blurred system zones—mixing AI input, developer instructions, and output review in a single pane. This increased cognitive load and made the IDE feel unpredictable.

UX Audit Summary: Clarity, Transparency & Developer Control
The current 'OpenHands' platform reveals a significant disconnect between the developer’s expected mental model of an IDE and the way the interface is structured.
What it looks like now:
A. Menu Bar Misalignment
The top-level menu fails to surface the platform's primary developer actions. Instead, it hosts a flat, non-hierarchical list of secondary features like "Academy," "Support," and "File Management." This lack of information architecture creates ambiguity, forcing users to search for core functions.
By aligning the menu bar with the platform's primary developer actions, we can enhance the user experience, making it more intuitive and efficient. This strategic change will uphold the principle of transparent discoverability, ensuring that developers can immediately understand the available actions and where to begin.
B. Over-Nested Workspace Architecture
The current interface exposes too many workspace layers at once, such as Workspace, Jupyter, App, and Browser, regardless of task context. This significantly increases cognitive load and introduces friction in task switching. Progressive disclosure, revealing information gradually based on user actions or needs, would help manage this complexity.
Strategic Implication: To support controlled focus, only relevant panels should surface per context. Nested complexity must be simplified through progressive disclosure.
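The progressive-disclosure rule above can be expressed as a small sketch. The context names and panel lists here are illustrative assumptions, not OpenHands' real workspace layers; the logic simply shows that each task context surfaces only its relevant panels, with anything else summoned explicitly by the developer.

```python
# Hypothetical mapping of task context -> panels that should be visible.
PANELS_BY_CONTEXT = {
    "authoring": ["editor", "ai_chat"],
    "notebook": ["editor", "jupyter"],
    "testing": ["editor", "terminal", "test_runner"],
    "deploying": ["editor", "ci_cd_status"],
}

def visible_panels(context, pinned=None):
    """Return only the panels the current context calls for, plus any
    panels the developer explicitly pinned (complexity on demand)."""
    pinned = pinned or set()
    base = PANELS_BY_CONTEXT.get(context, ["editor"])
    return base + sorted(pinned - set(base))

# The terminal appears only when the workflow needs it or the user asks:
authoring_default = visible_panels("authoring")
authoring_pinned = visible_panels("authoring", pinned={"terminal"})
```

In the default authoring context the terminal stays hidden; pinning it surfaces it without changing the context's baseline, which is the "summon complexity when needed" principle in code form.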
C. Unnecessary Terminal Visibility
The terminal is persistently shown, even when not required for the current workflow. This contributes to visual clutter and risks overwhelming less technical users.
Strategic Implication: UX should allow users to summon complexity when needed, not be forced to manage it at all times.
D. Scattered Actions, No Logical Grouping
Core actions like run, build, deploy, and test are currently scattered across different sections of the interface without clear categorization or flow. This lack of semantic grouping undermines learnability and system predictability, making it difficult for users to navigate the platform.
Strategic Implication: Without a transparent system model, users struggle to form accurate mental models. A consistent logic must underlie UI zones and commands to foster trust and ease of use.
What it should look like:
A clear separation of zones: authoring (code), collaborating (chat/guidance), and system state (logs, terminal, CI/CD feedback)
A progressive interface: showing only what's relevant at each stage of the development lifecycle
A task-centered navigation model: reflecting how developers think, “What am I trying to build?”, not “Which internal module am I viewing?”
AI narratives - Competitive analysis as a source for inspiration
Devin, the AI software engineer*, helped inspire a conceptual shift in 'OpenHands':
1. Reframing development tasks as approachable, persistent entities: always on, context-aware, and just a command away. *Devin integrates directly into your Slack workspace; once your organization enables the Slack integration, users can trigger Devin by tagging @Devin in any channel or thread. Devin responds inline and continues the session interactively, just as it does in the standalone app.

2. While Devin operates with structured output logs and ticketing updates, 'OpenHands' embeds its AI logic directly into the developer's Git workflow, ensuring progress is tracked in the same place developers already use to collaborate and control quality. My inspiration from Devin hinges on improving transparency within the current 'OpenHands' IDE.

Problem Framing
'OpenHands' targets highly skilled users: power users, cautious developers, and advanced full-stack engineers. Yet user feedback showed that basic UX issues create unnecessary friction, which must be resolved before deeper challenges around AI mental models and anticipation can be addressed.
Rather than redesigning the product from scratch, the focus shifted to quick wins: surfacing hidden functionality, reducing interface clutter, and clarifying core workflow zones. These foundational improvements are essential because advanced users already tolerate setup complexity, yet even they struggle with UI-level confusion that erodes trust.
A recent GitHub issue puts it plainly:
“Agent thoughts are displayed after the suggested action rather than before, which creates a confusing user experience.” (GitHub issue)
This highlights the need to better align UI feedback with user expectations. By addressing these low-level UX breakdowns, we enable developers to confidently collaborate with AI, without cognitive overload.
UX Strategy
Reimagine the IDE as a developer-first, AI-enhanced workspace, less a traditional editor, more a collaborative environment that understands intent. 'OpenHands' unifies code authoring, testing, and deployment into a seamless, guided flow. Rather than navigating fragmented panes and nested structures, developers are met with a system built for clarity, momentum, and control.
Informed by academic UX research, the strategy prioritizes three core principles:
Transparency: All AI-generated suggestions, test scaffolding, and deployment steps are clearly surfaced and traceable, helping developers understand what’s happening and why.
Control: Developers remain in charge; every automated action is reversible, editable, and integrated into familiar workflows.
Confidence: Through conversational onboarding, contextual feedback (e.g., real-time CI/CD indicators), and live process monitoring, the system builds trust without overpromising or obscuring logic.
A modular layout supports this vision: an Activity Bar anchors the workflow; customizable sidebars surface contextual tools; and embedded status indicators keep users informed without disruption. From code prompts to deployment, every action respects the developer’s rhythm, streamlining the journey from idea to production without sacrificing agency or clarity.
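The "informed without disruption" behavior of the embedded status indicators can be sketched as a tiny state model. The stage names, glyphs, and the rule that only failures interrupt are assumptions for illustration, not the product's actual logic.

```python
from enum import Enum

class Stage(Enum):
    """Hypothetical CI/CD pipeline stages surfaced in the Activity Bar."""
    QUEUED = "queued"
    RUNNING = "running"
    PASSED = "passed"
    FAILED = "failed"

def indicator(stage: Stage) -> str:
    """Render a one-glyph, glanceable status for the Activity Bar."""
    glyphs = {"queued": "…", "running": "~", "passed": "✓", "failed": "✗"}
    return glyphs[stage.value]

def should_interrupt(stage: Stage) -> bool:
    # Only a failure warrants pulling the developer out of their flow;
    # every other stage remains a passive, glanceable indicator.
    return stage is Stage.FAILED
```

The design choice this encodes: status is always visible but almost never modal, so the developer's rhythm is respected from code prompt through deployment.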
Diamond two: finding the right solution

User Journey:
Ideation Workshops
Worked with founders to define:
A modular architecture
Seamless code-to-deploy interaction logic
In-editor CI/CD visibility
A clearer division between task input, orchestration, and output
Converging on a solution
I identified that 'OpenHands'’ core issue was a missing or incoherent mental model. Developers couldn’t grasp how AI-supported tasks fit into the development flow or how the interface aligned with Git-based conventions. I proposed a deep IA revamp to replace the over-nested layout, clarified task orchestration using AI UX patterns, and aligned microcopy and flow with UX-for-developers best practices. The solution aligned with the founder’s vision and was validated through qualitative testing with developers.
Information Architecture Redesign
Redesigned the interface to separate:
Authoring space for writing code
Collaboration space for AI-powered task assistance
Contextual tools for testing, deployment, and repo visibility
Interaction Design & Prototyping
Created low- and high-fidelity mockups that included:
A conversational onboarding experience
An Activity Bar for smoother context switching
Sandboxed endpoints for safe testing of AI-assisted outputs
UX/UI iterations: grey temperature selection and accent color selection
AI Integration
Defined the UX for AI-assisted development tasks, ensuring interactions aligned with developer workflows and mental models.
Code suggestions, test path generation, and deployment logic integrated directly into working flows
Developers could review, edit, or discard outputs, maintaining control and trust
All outputs were explainable, reversible, and traceable, anchored in Git workflows
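The review-edit-discard gate described above can be sketched as follows. This is a hedged illustration: `SuggestedChange`, `review`, and the state names are hypothetical, chosen only to show that nothing merges without an explicit developer decision.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuggestedChange:
    summary: str        # plain-language explanation (explainable)
    diff: str           # the proposed change, reviewable as a Git diff
    state: str = "pending"

def review(change: SuggestedChange, decision: str,
           edited_diff: Optional[str] = None) -> SuggestedChange:
    """Apply the developer's decision; the AI never merges on its own."""
    if decision == "approve":
        change.state = "merged"         # lands as a normal, traceable Git commit
    elif decision == "edit":
        change.diff = edited_diff or change.diff
        change.state = "merged"         # developer-amended before landing
    elif decision == "discard":
        change.state = "discarded"      # reversible: the suggestion is dropped
    return change

change = SuggestedChange("add retry logic to deploy step", "+ retries: 3")
review(change, "discard")
```

Because every path through `review` records a state transition, each output stays auditable after the fact, which mirrors the Git-anchored traceability described above.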
To sum it up
Key Challenges & Trade-Offs
Fragmentation vs. Flow: Unifying the experience without overwhelming the user
Task Logic Clarity: Making assisted steps transparent and anticipatable
Trust & Oversight: Supporting control while maintaining the benefits of assistance
Results
Refined Mental Model: Developers could now understand system behavior and supported actions
Modular Architecture: Zones aligned with development stages (author, review, deploy)
Guided Onboarding: Lowered friction for first-time use
Transparent Assistance: Outputs were editable, auditable, and trustworthy
Strategic Readiness: Redesign positioned 'OpenHands' for confident MVP rollout
User Feedback
Stakeholders and developers reported greater clarity, confidence, and usability. The system evolved from feeling opaque and overly complex to behaving like a supportive, integrated workspace.