The road to a driver's license is arduous and, by now, genuinely expensive. Countless theory questions, and at the end many novices are left empty-handed. No wonder more and more driving students are turning to cheating. And criminals are cashing in on it.
The one thing your team needs to agree on before prompting any AI tool

Have you seen that meme with the guy pointing at a wall covered in papers, pictures, strings, and conspiracy theories? That was me when ChatGPT came out, pointing at a screen saying “it’s just clusters and vectors, it’s not magic.” Nobody cared. Yet.

This is what it looks like when you understand something before everyone else does. Or at least that’s what it feels like. Charlie Kelly from It’s Always Sunny in Philadelphia, FX Networks. Used for editorial and illustrative purposes.

Back in 2020 I was working at a startup that collected, processed, augmented, and delivered insights about points of interest, physical locations, and all the data you can attach to them. My role at that point was somewhere between designer, PM, analyst, and person trying to survive a pandemic at a small but successful startup. Our CTO knew we had to get more technical to stay competitive, so he brought in a data scientist and pushed us all to engage with the work seriously. There was a lot of math, a Python library called pandas that I still can’t fully explain, and diagrams I stared at longer than I’d like to admit.

Then he sent us a paper. I still think about that abstract, which is corny but true. It was called “Attention Is All You Need,” published in 2017 by a team of researchers at Google, and I’d argue it’s one of the most consequential things written in the last decade. The architecture it introduced, the transformer, is the foundation underneath every large language model you’ve used.

What made it click for me wasn’t the math. It was the concept. Before this, AI processed information sequentially, word by word, often losing the thread by the time it reached the end of a sentence. The attention mechanism changed that. It let the model look at everything at once and learn what to focus on, what to weight heavily, what to set aside.
The model developed something that looks a lot like judgment about relevance.

The transformer architecture from “Attention Is All You Need,” Vaswani et al. Video: youtube.com/watch?v=iDulhoQ2pro

So if attention is how the model decides what matters, vectors and clusters are where it actually puts that attention. Think of it this way: instead of storing words as definitions, the model represents them as points in space. Words with related meanings end up close together. Clusters form. “Invoice,” “payment,” and “receipt” live near each other. “Dashboard,” “metric,” and “report” form their own neighborhood. Meaning lives in the relationships, not in the words themselves. Which means if your team calls the same thing three different names, the model isn’t confused because it’s dumb. It’s confused because you are.

This is what word clusters look like, each word being a vector. Visualization generated by the author.

The real reason your AI keeps getting it wrong

As designers we wear a lot of hats. The one that matters most right now isn’t UI, or systems, or even UX. It’s language.

We use words to make sense of things. Without shared definitions, even a simple diagram like this becomes ambiguous.

This past week I was presenting analytics to a client and he stopped me mid-sentence: “What does intent mean? What does it mean to you and your team? Because I’m understanding it differently, and I’m not sure the client even meant that.” Intent was the output of the LLM. Not necessarily wrong. We had just all been naming the same thing differently, and nobody had caught it until that moment. When it was just humans in the room, that was an awkward meeting. Now that AI is in the room too, it’s the kind of misalignment that ships.

The moment AI is in the loop, language gaps stop being a coordination issue and start being a product issue. The model doesn’t ask for clarification. It doesn’t flag ambiguity.
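The clusters-of-vectors idea can be made concrete with a toy sketch. This is a minimal illustration, not a real embedding model: the three-dimensional vectors below are invented for the example (real models use hundreds of dimensions), but the geometry, and why inconsistent naming hurts, works the same way.

```python
import math

# Invented toy "embeddings" -- purely illustrative, not from any real model.
embeddings = {
    "invoice":   [0.90, 0.10, 0.00],
    "payment":   [0.80, 0.20, 0.10],
    "receipt":   [0.85, 0.15, 0.05],
    "dashboard": [0.10, 0.90, 0.20],
    "metric":    [0.05, 0.85, 0.30],
    "report":    [0.15, 0.80, 0.25],
}

def cosine(a, b):
    """Similarity of two vectors: close to 1.0 = same direction/meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Words in the same neighborhood score high; words across
# neighborhoods score low. That proximity is all the model has.
same = cosine(embeddings["invoice"], embeddings["payment"])
cross = cosine(embeddings["invoice"], embeddings["dashboard"])
assert same > cross
```

If your team's three names for one concept land in three different neighborhoods, the model treats them as three different things, which is exactly the misalignment described below.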
It just acts on whatever definition it inferred, confidently, at scale.

So what does it take to make sure the model is working from the same definitions as your team? That’s what a data model is for. The following is a practical guide to building one with your team, for whatever AI product you’re working on, so it makes sense to you, the AI, and ultimately the people you’re building for.

Before you build anything, name the things: entities

For the sake of this exercise, let’s start fresh. We’re building a food delivery app. Such an innovation, I know. FYI, an entity is any thing in your system that has a name, a definition, and attributes that describe it. The noun your whole team needs to agree on before anyone builds anything.

Before we list a single entity, we need to know what we’re trying to make someone feel. In this case: a hungry person orders food and it shows up. Fast, correct, no friction. That experience is the constraint. Everything we define has to serve it.

Eight entities. Named, defined, one sentence each. This is where the data model starts.

So let’s list what we actually need for that to work:

User: the person ordering food. They have a location, preferences, and an order history.
Restaurant: the place preparing the food. It has a menu, hours, and a location.
Menu item: a specific dish or product a restaurant offers. It has a name, a price, and it belongs to a restaurant.
Order: the transaction connecting a user to a restaurant. It has a status, a list of items, a total, and a timestamp.
Driver: the person delivering the order. They have a location, availability, and an assigned order.
Delivery: the physical act of getting the order from restaurant to user. It has a route, a status, and an estimated time.
Payment: the financial record of the transaction. It belongs to an order and has a status.
Review: feedback left by the user after the delivery. It references the order, the restaurant, and sometimes the driver.

Those are entities.
Defined, named, one sentence each. That’s step one.

You probably already work with a version of this and don’t call it that. Designers will recognize it as an entity relationship diagram, a tool for mapping how concepts connect. PMs have fragments of it scattered across acceptance criteria and data requirements in tickets. This isn’t a new idea. Engineers have been doing this for decades, and have been asking for this shared language for a while. Three versions of the same map, rarely shared.

What’s changed is that AI made it everyone’s problem. When a model has to reason over your product, vague definitions don’t just slow down a sprint. They become the product.

Your entities have something in common. Find it.

A list is not a model. A list is just words. To make it useful we need to do what the transformer did with language: cluster things that belong together.

The same eight entities, now grouped by what they have in common. A list becomes a model the moment things belong somewhere.

Our eight entities don’t all live in the same neighborhood. Let’s group them:

People and identity: User, Driver. These are the humans in the system, each with their own context, permissions, and goals.
The offer: Restaurant, Menu Item. This is what’s available, what can be ordered, what belongs to whom.
The transaction: Order, Payment. This is the moment something happens, money moves, a commitment is made.
The experience: Delivery, Review. This is what the user actually feels: the wait, the arrival, the reflection after.

You could take this clustering and sketch wireframes right now, but it’s still not enough. Because we haven’t told the system how these clusters relate to each other.

This is where most teams stop too early.

The relationships your AI can’t guess

It might feel obvious that an Order belongs to a User, or that a Delivery references a Driver. But obvious to a human and explicit to a system are very different things. The model can’t infer relationships the way we do.
It needs them stated.

When working with AI tools, giving them more stuff doesn’t always give you better results. I learned this the hard way while trying to spin up a proof of concept. I had Figma Make, a PRD, a design system, a list of requirements. I thought it would take an afternoon. It did not. The outputs were generic, misaligned, and confidently wrong in ways that were hard to even explain. The tool just didn’t know where to put its attention.

That’s when I dove into data models seriously.

The entity relationship diagram with connectors labeled. Every arrow is a decision the model no longer has to guess.

Let’s make the relationships explicit, using three questions:

What does it produce? One thing creates another as a direct result. A Restaurant produces Menu Items. An Order produces a Payment. A completed Delivery produces a Review.

What does it reference? One thing points to another that exists independently. An Order references a User, a Restaurant, and one or more Menu Items. A Delivery references an Order and a Driver. Neither owns the other; they just know about each other.

What does it influence? One thing shapes how another behaves or is prioritized, without creating it directly. User preferences influence which Restaurants surface first. Order history influences delivery time estimates. This is the relationship type that matters most to AI, because it’s how context shapes decisions without being explicitly stated every time.

Those three questions are your connectors. And if you’ve ever drawn a user flow where one screen leads to another, or mapped ticket dependencies in a sprint, you’ve already done this kind of thinking. The diagram just makes it explicit and shared.

Additionally, every end of every arrow can be a rule. Define it clearly and your AI, your engineer, and your PM are all reading the same map.

This is also where the parallel to the attention mechanism closes. The entities are the clusters. The connectors are the attention.
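The entities, clusters, and connectors described above can be written down as a tiny machine-readable map. This is a minimal sketch: the dict-and-tuple encoding and the helper name `related` are my own choices, not any tool's format; the entity names, clusters, and relationships follow the food delivery example.

```python
# Hypothetical encoding of the food delivery data model.
# Names follow the article; the structure itself is one possible choice.
ENTITIES = {
    "User":       ["location", "preferences", "order_history"],
    "Restaurant": ["menu", "hours", "location"],
    "MenuItem":   ["name", "price", "restaurant_id"],
    "Order":      ["status", "items", "total", "timestamp"],
    "Driver":     ["location", "availability", "assigned_order"],
    "Delivery":   ["route", "status", "estimated_time"],
    "Payment":    ["status", "order_id"],
    "Review":     ["order_id", "restaurant_id", "driver_id"],
}

CLUSTERS = {
    "people_and_identity": ["User", "Driver"],
    "the_offer":           ["Restaurant", "MenuItem"],
    "the_transaction":     ["Order", "Payment"],
    "the_experience":      ["Delivery", "Review"],
}

# The three connector types, made explicit as (source, connector, target).
RELATIONSHIPS = [
    ("Restaurant", "produces",   "MenuItem"),
    ("Order",      "produces",   "Payment"),
    ("Delivery",   "produces",   "Review"),
    ("Order",      "references", "User"),
    ("Order",      "references", "Restaurant"),
    ("Order",      "references", "MenuItem"),
    ("Delivery",   "references", "Order"),
    ("Delivery",   "references", "Driver"),
    ("User",       "influences", "Restaurant"),  # preferences shape ranking
    ("Order",      "influences", "Delivery"),    # history shapes time estimates
]

def related(entity, kind):
    """Everything `entity` points at with a given connector type."""
    return [dst for src, rel, dst in RELATIONSHIPS
            if src == entity and rel == kind]

# Sanity check: every entity belongs to exactly one cluster.
clustered = [e for members in CLUSTERS.values() for e in members]
assert sorted(clustered) == sorted(ENTITIES)

# An Order references three independent things; a model reading this
# map no longer has to guess those links.
assert related("Order", "references") == ["User", "Restaurant", "MenuItem"]
```

Nothing here is code an engineer would ship as-is; the point is that once the map is explicit, a human, a ticket, and an AI tool can all read the same relationships.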
You’re defining the relationships that give those things meaning.

Now step back. Look at what you have. A set of named, defined entities. Grouped into clusters that make intuitive sense. Connected by relationships that are explicit enough for a model, a designer, a PM, and an engineer to all read the same way.

This is an example of a diagram I built for one of my teams, before we implemented some AI agents. Entity names have been anonymized.

Build it together

When I first presented this to my team I wasn’t sure how it would land. Data models sound technical. The word alone can make a designer’s eyes glaze over or make a PM reach for their phone. So I showed it instead of explaining it. Here’s the model, here’s the prompt I built from it, here’s what Figma Make produced.

The prompt wasn’t long. It looked something like: “Using this data model… create a … for a user who… and wants to complete….” One prompt, five minutes, dropped into Figma Make, and you get something you can concept test: not a generic card layout, but a screen that knew what to surface and what to hold back.

The output from a single prompt, built on the data model. The Driver surfaces first because the Delivery relationship made movement the priority. The Restaurant drops to the map because the offer cluster is reference, not focus. The progress tracker reflects Order status. None of that was designed manually. It was inferred from structure. Prototype generated with Figma Make by the author.

PROMPT EXAMPLE: Design the order tracking screen for a hungry person who has just placed an order on a food delivery app.
Use the following data model as the underlying logic for every decision you make about what to show, what to hide, and what to prioritize.

**Entities and their attributes:**

User: has a location, preferences, and an order history.
Restaurant: has a name, a menu, hours, and a location.
Menu item: has a name, a price, and belongs to a Restaurant.
Order: has a status, a list of items, a total, and a timestamp. References a User, a Restaurant, and one or more Menu Items.
Driver: has a name, a location, and an availability status.
Delivery: has a route, a status, and an estimated time. References an Order and a Driver.
Payment: has a status and belongs to an Order.
Review: references the Order, the Restaurant, and sometimes the Driver.

**Cluster priorities for this screen:**

The experience cluster (Delivery, Review) is the primary focus. The user is waiting. That feeling is the design constraint.
The transaction cluster (Order, Payment) is secondary. Show confirmation, not complexity.
The offer cluster (Restaurant, Menu Item) is reference only. It already happened.
The people and identity cluster (User, Driver) surfaces the Driver prominently, because movement is what the user cares about right now.

**Relationships to reflect in the UI:**

Delivery references a Driver: show the Driver's name and live location as the primary element.
Order references a Restaurant and Menu Items: show these as a collapsible summary, not the focus.
User preferences influence what surfaces first: if the user has ordered from this restaurant before, surface a familiar detail that builds trust.
Order history influences tone: returning users get a lighter, more confident experience.
First-time users get more guidance.
A completed Delivery produces a Review: when status changes to delivered, transition immediately into a review prompt anchored to the Order and the Restaurant.

**UI states to include:**

Order confirmed, preparing, out for delivery, delivered.
Driver location updating in real time.
Estimated time prominent and updating.
Review prompt on delivery completion.

**Tone:** Calm, clear, reassuring. The user is hungry and waiting. Every element should reduce anxiety, not add to it.

The engineer looked at the diagram and said, “this is what we do.” Not exactly the same format, not the same vocabulary, but the same idea. A shared map of what exists, what connects, and what it all means.

A designer can facilitate it, and should. The designer’s job is to translate everyone’s input into a coherent shared structure. The PM brings requirements and context. The engineer brings technical judgment: what’s feasible, what’s fragile. The designer listens, organizes, and maps. By the time you’re done you have something the engineer can build from, the PM can write tickets against, and the AI tool can actually reason over.

The two logical next steps the AI came up with based on the information it had, such as the goal and the data model: where to save menu items for future orders, and giving and adjusting a tip for the driver after a delivery is completed. Results are no longer random or disconnected from the experience. Prototypes generated with Figma Make by the author.

This connects directly to the first principle in my previous piece on Agentic UX: think systematically.

Agents rely on structured systems to act consistently. If the product’s flows, labels, and hierarchies are messy, agent actions will feel random or confusing.

The data model is where that systematic thinking starts, before the wireframes, before the tickets, before the prompts.
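A data model captured as structured data can also generate the prompt itself, so the structure the team agreed on is exactly what the tool receives. A hedged sketch: the function `model_to_prompt` and its formatting are hypothetical choices of mine, not any tool's API; any consistent serialization would do.

```python
# Hypothetical helper: turn an entity/relationship map into a prompt block.
def model_to_prompt(entities, relationships, goal):
    lines = [f"Goal: {goal}", "", "Entities and their attributes:"]
    for name, attributes in entities.items():
        lines.append(f"- {name}: {', '.join(attributes)}")
    lines.append("")
    lines.append("Relationships:")
    for src, rel, dst in relationships:
        lines.append(f"- {src} {rel} {dst}")
    return "\n".join(lines)

# A two-entity slice of the food delivery model, for illustration.
prompt = model_to_prompt(
    {"Order": ["status", "items", "total"],
     "Delivery": ["route", "status", "estimated_time"]},
    [("Delivery", "references", "Order")],
    goal="Design the order tracking screen for a hungry user.",
)
print(prompt)
```

The payoff is consistency: the prompt can never drift from the model, because it is derived from it, which is the systematic thinking the Agentic UX principle above asks for.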
It’s the foundation everything else builds on.

Where your product sits on this spectrum determines how ready it is for agents to act within it reliably. Framework from the author’s work on agentic UX.

Before your next sprint

When an AI tool doesn’t deliver, it’s rarely the model’s fault. And it’s rarely the prompt, at least not in isolation. A prompt can only reason over what it’s been given. If the definitions are vague, the relationships undefined, the language inconsistent across your team, no amount of prompt craft will close that gap.

The data model is the cheapest investment you can make before building anything with AI.

Three steps. In that order. The data model lives in step two, but nothing works without step one, and step three is the whole point.

A data model saves you from the meeting where everyone argues about what “intent” means. Clarity upfront is the most efficient thing you can do. For your team, for your tools, and for the AI that will eventually try to reason over everything you’re building.

Also worth knowing: 51% of Figma users working on AI products are now building agents, up from 21% the year before. The teams moving fastest are the ones who figured out the foundations first.

And if you want to take this one step further, the next piece in this series goes deeper into how to turn this map into a task diagram your whole team can build together, and how to use it to test a concept before a single ticket gets written. Because a shared language is only the beginning. What you do with it is where it gets interesting.

The task diagram is an artifact that teams and AI use to quickly create deliverables. More on this in the next piece.

References

“Attention Is All You Need,” Vaswani et al., Google, 2017
“I Finally Understood ‘Attention Is All You Need’ After So Long. Here’s How I Did It,” Olubusolami Sogunle, 2025
Attention Is All You Need [Video], Yannic Kilcher, 2017
What is an entity relationship diagram, Figma Resource Library
Speaking the Same Language: Engineering and Design in Product Teams, SafetyCulture Engineering, Medium, 2024
Agentic UX: 7 principles for designing systems with agents, Alexandra Vasquez, Medium Bootcamp, 2026
Figma 2025 AI Report, Figma Blog, 2025

Data models: the shared language your AI and team are both missing was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
We didn’t mean to build this: engagement at any cost

How well-meaning designers became complicit in broken systems, and why handing those same briefs to AI could prove catastrophic.

Image generated with AI

Good intentions, broken systems

A New Mexico state court slapped Meta with a $375m fine on 24 March ’26 for misleading users about their platform’s safety. For a trillion-dollar organisation this amounts to a speeding ticket. But what makes this landmark ruling so interesting is that the winning argument called into question the design features of their applications, citing these as the cause, with charges including a failure to protect minors.

The sad part is that in real terms, we haven’t learned anything we didn’t already know; it’s been well documented that social media apps exacerbate addictive tendencies and can negatively impact personal behaviours. As much as the tech CEOs deny it, they typically place strict restrictions on the apps and devices their own children are allowed to use.

So how did product designers who pride themselves on using user research and evidence to inform decisions end up allowing it to reach this point? I’ve yet to meet a designer who actively seeks to make a product that is knowingly harmful; in fact designers want to delight their users. So what are the factors that drive organisations to end up in such a place? And are we as designers complicit?

This is not a new human failing. Rutger Bregman argues in his 2020 book Humankind: A Hopeful History that individuals are fundamentally decent but are capable of doing terrible things if they believe they are doing good. As Bregman writes, “If you push people hard enough, if you poke and prod, bait and manipulate, many of us are indeed capable of doing evil.
The road to hell is paved with good intentions.” This feels uncomfortable to hear, but it rings true, as well-meaning designers end up embedded in systems that cause real harm.

Let’s step back and look at the general trajectory of big tech experiences over the last few decades. Organisations that once voraciously championed an effortless user experience to attract customers have now turned this proposition on its head. Customers in ring-fenced ecosystems are now targets to be exploited for profit at all costs, tolerating poorer, more costly experiences simply because switching out is so inconvenient. This is referred to as “platform decay,” but you may be more familiar with the more colourful term “enshittification,” coined by Cory Doctorow to describe exactly this effect.

So what is the perfect storm that brings these unintended yet reprehensible outcomes to pass? It typically starts with the definitions of success: a matrix of engagement-heavy user metrics coupled with aggressive growth and retention targets. The matrix acts as a proxy for profit measures, and prioritising profit attainment quickly surpasses any other factor. The consequential human costs of attaining these targets are not reflected in the dashboard. Targets resolve into just numbers to reach by the end of the quarter, by designing the right levers.

“The consequential human costs of attaining these targets are not reflected in the dashboard”

The consequential problems that “might” manifest if targets are attained are not treated as concerns, because they’re not today’s problems. In fact they’re treated as “nice problems to have” that can be dealt with in the future, if we get there. But through incremental gains it doesn’t take long for these thresholds to appear in the product’s rear-view mirror.

With the engine running at full speed, nobody wants to reduce the momentum.
Financial incentives, quarterly time pressures, and external market pressures all steep in a culture that ignores any recourse and zealously embraces the maxim “move fast and break things.” It starts to become apparent how these success matrices shape design briefs.

This is common across tech. We’ve seen it play out to a similar extent with security being deprioritised in products: doorbell cameras or web-connected children’s toys have shipped with gaping security flaws, because safeguarding has always been a distant second when it comes to representation in the success matrix. Alarms were only ever raised because the consequences were direct and easier to spot than a toxic algorithm embedded as the core feature of a product driving billions in revenue.

The problem with creating such a space is that it not only lets bad outcomes manifest; it makes their manifestation inevitable.

Where do designers sit in all of this? We aim to create successful product levers that move these metrics. But by narrowly focusing on lever design, we only see those quarterly targets on the horizon, put our blinders on, and begin racing to that finish line.

As we move forward with broken briefs so closely aligned to profit, it becomes easy to justify cutting corners and drift incrementally further from the original intent. The rewards grow greater with each step, until you are racing ahead on questionable practices and being pulled over for reckless endangerment by the authorities.

We need to ask ourselves: is this the right finish line? The question becomes even more pertinent when the brief is given not to a designer but to an agent.

Black boxes briefing black boxes

If we are constantly deviating due to these broken briefs, what might happen when we want to do this very quickly at scale? When we pass these flawed briefs to AI agents, are we multiplying the problem?

AI models are non-deterministic: we don’t know exactly what they’ll output even when we supply identical inputs.
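How quickly that non-determinism compounds across a chain can be shown with a toy calculation. This assumes, purely for illustration, that each agent hop independently preserves the original intent with some fixed probability; real agent behaviour is far messier, but the compounding is the point.

```python
# Toy model of intent drift through an agent chain. Assumes each hop
# independently preserves the original intent with probability p_per_hop;
# this is an illustrative simplification, not a measurement.
def chain_fidelity(p_per_hop: float, hops: int) -> float:
    """Probability the original intent survives the whole chain."""
    return p_per_hop ** hops

# Even a 90%-faithful hop erodes quickly: ten hops at 90% fidelity
# leave roughly a one-in-three chance the original intent is intact.
for hops in (1, 3, 5, 10):
    print(hops, round(chain_fidelity(0.9, hops), 2))
```

The exact numbers are invented; what matters is the shape of the curve, which is why each additional black box in the chain raises the stakes.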
So when one AI agent inherits a flawed brief, it passes its interpretation on to another, and so forth: a series of black boxes briefing one another. Each hop can introduce a deviation, and a few agents down the chain we can find our original intent heavily diluted or simply misconstrued. This premise makes Nick Bostrom’s paperclip maximiser scenario feel like it could very well ring true, where an AI tasked with making paperclips ends up converting all matter, including humans, into paperclips.

Intents can be quickly lost through agent chains (Image generated by AI)

“A series of black boxes briefing one another; each hop can introduce a deviation.”

By design, AI seeks novel solutions, and we as humans encourage this behaviour; we’re seeking to push the limits of generating creative solutions. But when we combine the need for innovation with optimising for engagement, without a clear ethical framework setting boundaries and constraints, we are unlikely to be prepared for the outcomes that arise from the system.

The stakes are further raised by the fact that product building has become cheap, so we’re seeing more organisations skipping prototyping and going straight to live population testing. The user base becomes a petri dish; moments for ethical review are replaced by a statistically significant result.

How inhumane does a system become? When key decisions are made along an entire agent chain, each one might be locally rationalised, but who oversees the chain as a whole?

In Meta’s case, fast forward a few years and suppose the egregiously designed features were all orchestrated and created by AI agents, with no individual intending to cause such harm, no direct malicious intent in the prompts, and no single decision acting as the lynchpin to apportion blame. Who is responsible? The designer that manages the agents? The business that sets aggressive metric targets? The operations team that didn’t enforce governance?
Or everyone in this chain that didn’t stop it?

Who is to blame?

The tools exist. The will does not.

Ethical frameworks have been around for years. There are a number of prominent ones, like Value Sensitive Design, the Santa Clara Ethical Toolkit, Ethics for Designers, and Ethical by Design: all applied, practical toolkits for ethical design practice, and all largely ignored in organisations.

Applying these frameworks in practice would generate some costly “ethical friction”: it would likely impede growth and introduce safeguards for users, reducing prolonged engagement metrics and, in effect, impacting profit. Ultimately it isn’t down to a lack of understanding or knowledge; the choice comes down to people vs profits, and time and time again profits are chosen over people.

The uncomfortable truth is there isn’t a commercial incentive to adopt such a framework, and truth be told, we as designers are complicit.

The Meta ruling is a bellwether. Regulation enforcement is coming; big tech has had its free lunch at the expense of users for too long. Whether they like it or not, individuals are no longer going to sit idly by and be missold to and exploited.

Another comparable example is in the EU, where European regulators have moved beyond GDPR data policies and have begun enforcing the Digital Services Act (DSA) with regard to design-related issues.
X was fined €120M over its blue check mark, a design feature found to be in violation of the DSA, misleading users and exposing them to more scams.

The Digital Fairness Act (DFA), currently in draft, aims to go further by clearly outlining dark patterns, addictive design, unfair personalisation and profiling, misleading influencer marketing, unfair pricing, and problems with digital subscriptions and cancellation flows.

Including ethical reviews as part of product design (Image generated by AI)

“Each time we choose not to apply these ethical frameworks, we are making a choice to be complicit”

With AI redefining how design operates and agents being assigned more responsibility, there is an even greater risk of the “unintended” consequences of unchecked design precipitating very rapidly. These design briefs are our opportunity to set the course straight, by clearly articulating the outcomes we are seeking, alongside the outcomes and consequences we will not accept. These are not limitations; they are our red lines, the guard rails we construct to keep our users safe.

Ethical frameworks are not new; the knowledge and toolkits have been in front of us for years, and the regulations are arriving regardless. As designers, each time we choose not to apply these ethical frameworks, whether in our designs or when briefing agents, we are making a choice to be complicit.

Good intentions are not enough. To build better requires us to take action: to be more critical, to decide what to enforce, what to question, and ultimately what to refuse.

Further reading on this topic

Tech Leaders Can Do More to Avoid Unintended Consequences, Wired (May 2022)
Advocating for People in a Profit-Driven World, People Nerds (Sept 2021)
Are we all fundamentally good? Philonomist interview with Rutger Bregman (Nov 2021)
Is the UK falling out of love with social media?
by Dan Milmo, Global technology editor, The Guardian (Apr 2026)

We didn’t mean to build this: engagement at any cost was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
The reported killing comes as multiple airstrikes hit areas around Tehran, including residential zones, amid a sharp escalation in the ongoing conflict.