A.L.L.I.E. Design Partner.
Simplicity and adaptability, a new dialogue for design.

Introducing,
A.L.L.I.E.

What is A.L.L.I.E?

This project is a conceptual AI design assistant in Adobe Illustrator, aiming to make the design process more efficient, encourage exploration, and help validate ideas. We followed a 7-week process incorporating machine learning UX, user research, human-centered design, agile design thinking, wireframing, and prototyping.

Team

Justin Catalano

Anna Fang

Aaron Tran

Luke Witty

Anderson Zhou

My role

UX lead

Problem

We were asked

Where can Generative AI help within the Adobe ecosystem?

What would it look like if it could be trained based on your styles, preferences, and approach?

How might that work and what could you do with it?

Research

So we researched

The next lifetime users.
Based on 8 interviews with college students (ages 18-25) who primarily use Adobe programs, the top pain points we found in their design process were:

• A lack of time when designing.

• Difficulty exploring different ideas.

• Uncertainty about whether they were heading in the right direction.

The current state of Generative AI (March 2023)
After reviewing secondary sources, we found that Generative AI could be used to:

• Increase production through variation.

• Learn and generate from what it is given.

• Generate contextual feedback based on an input.

How would they feel?

Conceptualizing

Based on that,

• How might AI help users explore, save time, and apply their ideas?

• How might AI improve users' engagement when using Illustrator?

• How might the user acquire personalized guidance, support, and workflows with AI?

What is the user journey?

We created current and future state journey maps for Asset Generation and Feedback Experts to understand the experience and emotions of the user.

What is the experience?

We built a model of how the user and A.L.L.I.E. would interact with each other.

Design Process

Design Sprint 1

We used an iterative design process driven by user feedback after each of the three sprints.

Drawing on our research and conceptualization phases, we started sketching and wireframing; here are the highlights.

Our users didn't like this idea at all; they wanted to see:

• How the AI responds to you, where that happens, and what it looks like.

• An easier interaction.

• The UI as more of an overlay and its own element.

Design Sprint 2

Using the feedback, we again explored something that would be fun to interact with, without losing our core functionality. We explored multiple wireframes and mid-fidelity mockups. Here is our process and an example of what these new wireframes looked like.

We received a better reaction, but there was more to address:

• Develop the chat interaction further.

• What does the motion look like?

• How would multiple functions work at the same time?

Design Sprint 3

The UI was settling in, which meant we could further integrate the experience within it.

It was time for our final round of user testing: how do you feel now?

• Refine minor design details.

• Further develop the Feedback Experts.

• Think thoroughly about the functionality for the prototype.

Meet A.L.L.I.E.

We arrived at our final UI and built out how the interaction would work within Illustrator. You can view how A.L.L.I.E. would change for each mode by hovering over an assistant.

Asset Generation: activates based on input questions; pulls generations from user data.

• Vector Generation: generates a manipulatable vector based on the input; four variations.

• Shape Generation: generates any shape, which can be manipulated; four variations.

• Color Generation: four different color variations of the work on the artboard.

• Font Generation: four different font variations related to what the user asked.

• Image Generation: generates four different images based on user input.

• Composition Generation: takes whatever is on the artboard and rearranges it; four variations.

• History: shows everything generated.

Experts: activate based on input questions; pull from accessibility and trusted resources.

• Suggestion Expert: watches your workflow and gives advice on how to improve it.

• Design Principle Expert: evaluates the design and shows how principles can be applied.

• Audience Expert: predicts how a certain group of viewers will interpret the design.

• Accessibility Expert: explains how to make the design more accessible.

• Compositional Expert: provides guidance and feedback on the current composition.

• History: shows everything generated.

Concept Prototype

Here is the prototype, created in Figma. It starts with a tutorial that teaches the AI tools, then follows a design process from a blank canvas to a full design, ending with validation. Note: because this is more of a storytelling prototype, use the yellow bar to progress through it. You can select different generations, but only one of them will move you on to the next step.

Behind the scenes.

Conclusion

Our next steps would be to:

• Allow users to deselect the tools in use to remove elements.

• Add the option to manually switch between feedback and asset generation.

• Create deeper interactions within functions that use non-A.L.L.I.E. features.

Final thoughts

This project pushed my understanding of designing conceptual interactions. Since we didn't build an actual AI, it was interesting to explore what was possible without limits. It was great working with Jason, whose direction helped guide us toward a solution. We spent many hours together as a team, and it was a lot of fun!

Designed by Justin Catalano

DuneSystems

© 2024