Blog Post #1 : AI in My Life and AI Literacy
My AI Philosophy
In the past, I used AI very casually to support my school and work: as a resource to research, learn something new, or explore topics. Recently, I have become very curious about what AI is capable of, and I have used it to code projects for me, give me step-by-step instructions for learning something new, and even practice interviewing. I will use the voice function for more than practice interview questions, hop between different AI models to see which one codes better, and document how I communicate with each one to make sure I create the most effective prompts. I want to explore AI further, but I have always felt a financial barrier because I do not want to subscribe to every platform.
I have been using AI in my design courses mostly to ideate, support with research, and generate steps to learn something new. I often need the most support when I am exploring an idea, so I have used ChatGPT to help me brainstorm mind maps, generate ideas, and support with different resources (what websites do xyz, how do I do xyz). This summer, I have seen a lot of different influencers and people on the internet use different platforms to create and design mobile applications, and I am excited to learn more about that process, as it seems as if it may be the future of UX/UI design. I have tried to mimic the "generate this app" prompt, but always found it difficult and need to practice more.
Something surprising I learned from the reading assignment was in the article "Could AI Slow Down Science?" by Sayash Kapoor and Arvind Narayanan. I found this article fascinating because it doesn't just describe the major setbacks that AI could bring to the world; it grounds them in historical evidence and context. I found it interesting that, as AI is integrated into STEM fields, it can affect the critical thinking process, slow down progress, and eventually make the problem worse. It was a good perspective on the impact AI could be making at a societal scale, because it helps me understand how this fits into my ethics and morals as a human using AI. This reading was a good insight into how AI is shaking up the industry, and I will use this perspective to navigate the world of AI.
Something surprising from the AI literacy lecture was the breakdown of what AI actually is in the context of generative AI and how content is created. I had always known there was some science behind AI, but I found it interesting that it is a balance of computer science, statistics, and math. It makes these subjects feel like the fundamentals of technology, since math and statistics are in every single aspect of it, and I appreciated how the lecture spoke about the importance of learning just enough of each subject to understand an outcome. This helps me understand how I can prompt AI, and how prompting is an extra layer of communication I have to learn (how do I communicate with math, statistics, and computers?).
I think that AI struggles with creating and communicating copy. I have always seen flaws in AI-generated copy, and I think that, as a human, I can curate copy that sounds more human and works better with the design. I can also bring a more cohesive design than AI. I have seen AI struggle to keep navigation, design elements (type, color), and user flow cohesive throughout a product, and I think that is something I can do better as a designer. I also believe I can build engagement better than AI, as it does not always understand what a human wants to do or see in a product.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blog Post #2 : Make AI Hallucinate
*My Task was to use Google Gemini, ChatGPT, or a similar tool to generate incorrect responses (Make AI hallucinate).*
How Did I (Attempt) to Make AI Hallucinate?
AI tool used: ChatGPT
1) My Prompt: “Can you tell me how many Cheetos I ate today?”
2) The AI tool's initial response:
“I can try to deconstruct how many cheetoes you ate. If you want, you can tell me things like:
the size of the bag (small snack, medium, family-size, party-size)
whether you finished it or ate part of it
or even approximate handfuls
…and I can estimate how many Cheetos that might have been.
Want me to help you estimate based on the bag size?”
3) My follow-up guardrails or information:
“I am eating Crunchy Sheezmos from Lesser Evil. I got them at Whole Foods, and I call them ‘Cheetos,’ but they are not actually the brand Cheetos.”
When I asked ChatGPT how many Cheetos I was eating, I focused on a question that hinges on one of the most unpredictable human encounters: cheesy, addictive, mind-numbing, flesh-eating, crunchy Cheetos. Because human behavior is so natural and unplanned, ChatGPT could not predict the user experience. It kept attempting to calculate my consumption from the chips’ nutritional facts, using serving sizes and the current state of the chips. What ChatGPT did not consider is that the majority of the chips were crushed into millions of crumbs. I was passively eating chips (with no recollection of how many, because they were so good), and I was being asked to quantify my unpredictable, incalculable human behavior. ChatGPT was not giving me a sense of how many chips I ate so much as a way to calculate my personal consumption. It also assumed my definition of “Cheetos” and did not account for the slang use of the word as a universal term for a cheesy, crunchy chip.
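The serving-size math ChatGPT kept falling back on can be sketched in a few lines of Python. Every number here (bag size, serving size, chips per serving) is a made-up placeholder, which is exactly the problem: a passive snacker can't supply accurate values for any of these inputs.

```python
# A rough sketch of the estimate ChatGPT kept proposing.
# All numbers are hypothetical placeholders -- the real blocker was
# that I had no accurate inputs to give it.

def estimate_chips_eaten(bag_grams, fraction_eaten, serving_grams, chips_per_serving):
    """Estimate chip count from bag size and how much of the bag was eaten."""
    grams_eaten = bag_grams * fraction_eaten
    servings_eaten = grams_eaten / serving_grams
    return round(servings_eaten * chips_per_serving)

# e.g. a 142 g bag, about half eaten, 28 g servings of roughly 21 chips each
print(estimate_chips_eaten(142, 0.5, 28, 21))  # -> 53
```

The formula itself is trivial; the point is that "fraction eaten" and "chips per serving" were exactly the things I could not observe, so the calculation just moves the guesswork around instead of answering the question.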
4) In other words...
AI was not able to understand my request because it could not calculate the unpredictable human behavior of passively eating chips. When I asked it, “How would I know how many serving sizes I ate if I don’t know my consumption?”, it got into a loop: it identified that I, as a user, could not calculate the answer because I did not have accurate data. It simply was not able to empathize with or understand human behavior, and it gave me an incorrect strategy to figure out the answer without any follow-up questions or probing.
While this incorrect response came from a very low-stakes prompt, the ethical concerns become clear when you translate this kind of inaccuracy to a more important or fragile piece of data. If you asked AI, “How much money do I owe in taxes this year?”, you would have to continually remind it what you consider taxable income and what the IRS considers taxable income. If AI were to do our taxes for us, there could be tax loopholes or tax benefits we would not be able to take advantage of, simply because filing complex taxes involves human behavior that triggers a human solution, like working with an advisor.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Blog Post #3 : Use AI to Design a Project
I was asked to *use Google Gemini and Google Stitch to ideate and design a product, then reflect on my experience by answering the following questions.*
Camilo + Google's AI = Poor UX!
The AI-generated design I created was an experience-based app that helps users support their mental health by tuning into their five senses. The goal was to create simple, interactive, step-by-step tasks that each resonate with one of the five senses.
AI was a pretty good assistant, but I had trouble using it as a designer. While the general prompting created a very innovative product, I found issues with how AI creates design elements and brings a design together cohesively. It did a pretty good job of setting a foundation, building features, and creating a quick design, but the quality was very boring and bland. As a user, I would not be able to freely navigate this product; there is no intuitive engagement to rely on, because it was made by AI, not by a user. I think it’s important for designers to nurture the human curiosity of a digital product by leaning into intuitive and intentional design that supports any user. AI did a great job of mimicking and rebuilding something that already exists, but I don’t think it does a good job of reasoning about the purpose or goal behind the design. That is where a designer can insert their skill to build an accurate experience. AI makes a good assistant for brainstorming and creating screens, but there needs to be a human designer, because design requires intuition.
I would say that a good prompt definitely accelerated my design process, as it allowed me to communicate with AI efficiently. While I was not successful in creating an effective prompt at first, I got the hang of it quickly. By asking for the right things in the right way, I was (eventually) able to achieve an accurate design early on, which was helpful when making edits. I tried a couple of different communication methods with the knowledge I have about prompting, and I found that handing off from Gemini to Stitch made it easier to create a prompt in one AI and initiate a design in another.
Some of the things that slowed me down were my edits to the original prompt, as my designs were getting too messy and drifting away from the original motive. My first couple of attempts at prompts with Gemini and Stitch were a little too vague and did not tackle a specific idea. I needed to be more specific and less human. This slowed me down because I never understood what a prompt should be until I ironed out and documented the input and output. I think the way you write a prompt influences the outcome you receive, which connects back to what we learned about the statistics and math behind AI: to get the answer to a math equation, you need to calculate with the correct formula. I am not going to lie, I still don’t know what the perfect formula is, but we get one step closer to maximizing the outcome of AI every day by perfecting a prompt.
In the future, I would use these design tools to brainstorm and ideate my product. When I asked AI for the entire UX of a product, the output did not incorporate the complexity behind UX/UI design. In reality, you need wireframes, roadmaps, data, feedback, personas, and many other documents for a human to produce a decent design. When you prompt AI to create something based on uploaded documents, it only processes the components in the image, not the critical thinking or analysis behind each task. I strive to absorb the process behind ideation and design by sketching, wireframing, storytelling, and problem-solving in my future work. I believe it will allow me to differentiate the AI junk (snake oil, if you will) from the future of UX/UI design.
I would say there are ethical implications to completely outsourcing UX design to AI, because you steer away from a human-made product and, potentially, a user-focused experience. If you take the human out of the equation, you are left with an entirely artificial product. It’s almost like extracting the seed from a newly planted flower: the single thing the outcome needs most cannot truly be automated. This is why subtracting the human from user-centered design will eventually lead to a poor experience. For the majority of digital products in the industry, the term “ethics” revolves around revenue and income, so if you want a crystal clear answer: products will lose money if they begin to outsource UX to AI. I have found myself less engaged on social media and shopping platforms because of the insane amount of AI content, nudged advertisements, and oversaturation (fueled by some sort of AI). More AI outsourcing = poor experience.