I am Reem Alattas. I redesigned workflows for AI, and ambition became execution.

How she thinks about decisions under uncertainty
When asked about the overlap between AI strategy, space data, and entrepreneurship, Reem Alattas frames all three as disciplines of judgement under imperfect information. Space data trained her to think at planetary scale, to read the world as an interconnected system rather than a series of local problems. AI strategy, in her telling, is the act of compressing that complexity into something executable, a digital roadmap rather than an abstract promise. Entrepreneurship provides the constraint that keeps the whole system honest. It forces decisions to be made quickly, with incomplete data, and with consequences that cannot be deferred. Put together, the question shifts from whether something can be built to whether it should be built, and what its systemic impact will be. Her attention is less on code and more on the horizon it creates.
What orbit teaches about AI’s real impact
On the question of how NASA data shaped her thinking, Alattas describes a recalibration of priorities. From orbit, she says, headlines disappear and patterns of survival come into focus. Climate stress, urban expansion, and resource pressure appear not as debates but as signals. That experience moved her away from viewing AI as a tool for marginal efficiency gains and towards seeing it as an instrument of foresight. When AI predicts floods or helps optimise harvests in water-stressed regions, she argues, it ceases to be innovation theatre and becomes a lifeline. Impact is measured in avoided harm, not technical novelty.
Translating AI into national outcomes
Pressed on her current work aligned with Saudi Vision 2030, Alattas is blunt about method. She starts with outcomes, not algorithms. Vision 2030 prioritises productivity and diversification, so AI must be tied directly to national KPIs. In practice, that means deploying systems that compress time or eliminate leakage across procurement, infrastructure, and citizen services. If an AI deployment does not move a national metric, she considers it an expensive hobby rather than a strategy.
Why some AI programmes deliver ROI
When the conversation turns to return on investment, Alattas reduces success to three conditions. There must be clear ownership with real accountability, clean data plumbing, and the courage to let machines influence decisions. The initiatives that fail are often housed in innovation labs without P&L responsibility. In her assessment, AI rarely fails on its own terms. Governance fails first.
The bolt-on fallacy in execution
Asked to identify the biggest gap between ambition and delivery, Alattas points to what she calls the bolt-on fallacy. Organisations want the magic of AI without redesigning workflows around it. Legacy assumptions remain untouched, especially around budgeting and risk. Adding AI to an unchanged process produces friction, not lift. Execution, she argues, requires killing outdated beliefs before adding new capabilities.
Institutional readiness at national scale
On the question of readiness, Alattas is unequivocal. It is the only thing that matters. Models can be purchased, but institutional muscle cannot. Scaling AI nationally demands regulatory harmony, where policy, data interoperability, and talent speak the same language. Without that alignment, AI fragments into symbolic pilots rather than becoming a transformative force.
Why she chose to found ventures
Asked to reflect on her move into founding companies, Alattas cites frustration with the innovation gap. She watched high quality research stall in slow adoption cycles. Founding ventures gave her both skin in the game and speed. It shortened the distance between idea and impact, replacing slide decks with real world tests.
What hardware taught her the hard way
When asked about Rumble Lights and its global recognition, Alattas focuses on difficulty rather than accolades. Hardware, she says, is a lesson in humility. Software errors can be patched overnight, but manufacturing mistakes become physical facts that cost months. Managing supply chains and international IP while maintaining safety standards forced her to confront the complexity of the physical world. The hardest part of technology, she learned, is orchestration, not code.
Measuring societal impact alongside profit
On evaluating impact, Alattas looks for incentive alignment. She asks whether a product can scale responsibly. Profit that creates dependency or inequality is unstable by design. She believes the most durable ventures of the next decade will be those where commercial success and social benefit are mathematically linked, not rhetorically aligned.
How AI is changing finance and procurement
When the conversation turns to finance, Alattas describes a shift from reactive reporting to predictive control. Finance once asked what was spent. AI now asks what will happen if no intervention occurs. This reframes procurement from a back office function into a strategic command centre, turning buyers into strategists who act before losses compound.
From automation to augmentation
Asked to assess the last decade of enterprise AI, Alattas traces a move from automation to augmentation. Early efforts focused on replacing repetitive tasks. Today the aim is to amplify human judgement. The debate is no longer framed as human versus machine, but as how a human in the loop can make better decisions at far greater speed.
AI and a Saudi creative renaissance
On media, gaming, and sports, Alattas sees AI as an engine for cultural production. Saudi Arabia, she argues, is shifting from consuming global content to producing its own intellectual property. AI enables hyper personalisation and immersive worlds that reflect local stories. She frames this as a creative renaissance aligned with national ambition.
The education gap the industry feels
When asked about talent, Alattas draws a sharp distinction. Education teaches people how to build the hammer. Industry needs people who know where to build the house. There is no shortage of model builders. There is a shortage of AI translators who can bridge ethics, economics, and system design.
Why executives must keep learning
Pressed on executive education, Alattas calls it non-negotiable. AI evolves faster than organisational memory. Leaders who lack strategic literacy in AI are making decisions based on an obsolete world. For her, continuous learning is about mental agility, not certificates.
The barriers women still face in deep tech
Asked to reflect on gender, Alattas points to the potential-versus-proof gap. Women are often evaluated on past performance, men on future promise. Closing that gap will not come from speeches, she says, but from structural changes in how capital is allocated and talent is recognised.
What could make the Kingdom globally competitive
On global competition, Alattas identifies what she calls a triple threat: the speed of a startup, the scale of national data, and aligned capital through public-private partnerships. If policy and technology continue to integrate at pace, she believes Saudi Arabia will set standards rather than chase them.
The applications that matter next
When asked what excites her, Alattas points to agentic AI and digital twins: systems that act, not just converse, and simulations that allow urban or climate policies to be tested virtually before a single riyal is spent.
One priority that outweighs the rest
Asked to name a single priority for the next five years, Alattas returns to decision intelligence. Capability without authority creates noise. The future, she argues, belongs to organisations and nations that trust their data enough to let it shape real decisions, and with them, their destiny.







