Thought Piece

How to Buy AI: When in Doubt, Go Faster

The way we need to buy AI tools is different from the legacy model. We have some thoughts.

Oct 4, 2024

The government needs AI, like…now.

Let's kick things off with a fundamental point that we really need to internalize: The nature of emerging tech and AI solutions is inherently iterative. What does that mean for us? It means we can't stick to our old, rigid ways of thinking about buyer and seller relationships. We need to evolve. We need to create partnerships that are flexible, dynamic, and responsive. These partnerships need to be able to shape and reshape requirements and solutions on the fly. It's not about signing a contract and walking away anymore. It's about creating a collaborative environment where both government and industry can adapt quickly to new developments, insights, and challenges. This is a major shift from our traditional procurement methods, but it's absolutely necessary if we want to harness the full potential of AI.

Now, let me give you the bottom line up front, because this is crucial: We simply cannot buy AI the way we've bought things in the past. It's not just that our old methods are suboptimal – they're fundamentally incompatible with the pace of AI development. AI is moving at a breakneck speed, with new breakthroughs and capabilities emerging almost daily. Our traditional procurement cycles, which can take months or even years, are just too slow. By the time we've finished procuring a solution, it might already be outdated. We need to be agile, we need to be responsive, and above all, we need to be fast. When in doubt, go faster – that's not just the title of this piece, it's a mantra we need to adopt if we want to stay relevant in the AI age.

So, why are we even having this conversation? Well, it all started with a bang – or should I say, a chat. OpenAI's ChatGPT burst onto the scene and suddenly, AI wasn't just a buzzword or a far-off concept anymore. It was right there, accessible to anyone with an internet connection. ChatGPT came in saying, "Hi, I'm ChatGPT, I can make your life way easier... kinda... if you know how to be a 'Prompt Engineer'." And just like that, the game changed. We had this incredibly powerful tool that promised to revolutionize how we work, but it came with a catch – you needed to know how to use it effectively. This "prompt engineering" concept introduced a whole new skill set that most of us hadn't even heard of before. It was exciting, it was powerful, but it was also a bit daunting. And it raised a crucial question: How do we integrate tools like this into government operations effectively and responsibly?

As you can imagine, this led to a wave of excitement and, let's be honest, a bit of envy among government employees. Suddenly, we had folks saying, "My life is hard... I want ChatGPT!" And can you blame them? The promise of AI to streamline workflows, automate tedious tasks, and provide instant information was incredibly alluring. But it wasn't just about making life easier. There was a recognition that AI could potentially transform how government operates, making it more efficient, more responsive, and more effective in serving the public. However, this enthusiasm also brought challenges. How do we ensure fair access to these tools? How do we address security concerns? How do we train people to use AI effectively? These were just some of the questions that started bubbling up as the excitement grew.

Then, as often happens with major technological shifts, the directive came from the top. The White House essentially said, "Thou shalt do AI." Now, on one hand, this was great. It showed that at the highest levels of government, there was a recognition of AI's importance and potential. It gave the green light for agencies to start exploring and implementing AI solutions. But on the other hand, it also created a sense of urgency – and sometimes, urgency can lead to hasty decisions. This top-down directive set the wheels in motion across all government agencies, but it also raised the stakes. Now, it wasn't just about exploring AI; it was about showing results. And as we'll see, this created both opportunities and challenges.

With the White House directive, government employees were energized. There was a palpable sense of excitement: "SWEET, let's do some AI..." People were eager to jump in and start implementing AI solutions. And that enthusiasm is great – it's exactly what we need to drive innovation and change. But enthusiasm alone isn't enough. While there was a lot of eagerness to get started, there wasn't always a clear direction on how to proceed. What does "doing AI" actually mean in practice? Which problems should we tackle first? How do we ensure we're using AI ethically and effectively? These were all questions that needed answers, and in many cases, agencies were figuring it out as they went along. This created a situation where there was a lot of activity, but not always with a clear strategy or end goal in mind.

As you might expect, it wasn't long before the Office of Management and Budget (OMB) came knocking: "Hey, so POTUS said do AI, how's that going? Fill out this spreadsheet with all your AI." Suddenly, agencies had to account for their AI initiatives. Now, on one level, this makes sense. We need oversight and accountability, especially when we're dealing with new and potentially transformative technologies. But it also created a new pressure. Agencies now had to demonstrate that they were making progress on AI, even if they were still in the early stages of figuring out how to implement it effectively. This led to a rush to show results, to have something – anything – to put in that spreadsheet. And while accountability is important, we have to be careful that it doesn't lead to hasty or ill-conceived implementations just for the sake of having something to report.

This OMB inquiry led to a bit of panic in some federal agencies. I heard things like, "Crap, what AI are we doing?! We need to do more AI." There was a sudden rush to implement AI, sometimes without a clear strategy or understanding of how it would actually improve operations. This is a dangerous situation. When we implement technology, especially something as powerful and complex as AI, we need to do it thoughtfully and strategically. It's not about doing AI for the sake of doing AI. It's about identifying real problems that AI can solve, and implementing solutions in a way that truly enhances our ability to serve the public. But in the rush to show progress, some agencies may have put the cart before the horse, focusing on the technology first rather than the problems it could solve.

Now, as you might expect, industry saw all of this as a golden opportunity. They came in saying, "I'LL SAVE YOU!" And look, that's not necessarily a bad thing. We need strong partnerships with industry to implement AI effectively. They have the expertise, the resources, and often the cutting-edge solutions that government needs. But we also need to be careful. In the rush to adopt AI, there's a risk of buying solutions that aren't really tailored to our needs, or that don't integrate well with existing systems. We need to be smart consumers, not just buying the shiniest new AI tool, but really understanding what we need and how different solutions can meet those needs.

Unfortunately, what industry often offered was less than ideal. Companies that previously couldn't spell "AI" suddenly came out of the woodwork with, "Here is our black box proprietary AI solution, and a 75-page prompting guide, you're welcome." Now, I'm not trying to bash industry here. They're doing what they think we want. But these black box solutions often left federal employees feeling underwhelmed: "This is... just kinda ok." And that's a problem. We don't need AI solutions that are just "ok." We need solutions that truly transform how we work, that solve real problems, and that can be understood and operated by our teams. Those 75-page prompting guides? They're a sign that the solution isn't intuitive enough, that it hasn't been designed with the end-user in mind. We need to push for better, for solutions that are powerful but also user-friendly and transparent.

So what happened? Federal agencies ended up saying, "We're prototyping and piloting AI!" And don't get me wrong, prototyping and piloting are important. We need to test solutions before we fully implement them. But too often, these weren't well-thought-out implementations. They were rushed, sometimes more for the sake of being able to say "we're doing AI" than for solving real problems. This is where we need to step back and rethink our approach. Prototyping and piloting should be strategic, focused on learning and improving, not just on checking a box. We need to be asking: What problem are we trying to solve? How will this AI solution help? What are the risks and how will we mitigate them? How will we measure success? Without this kind of thoughtful approach, we risk wasting resources on AI implementations that don't actually improve our operations or serve the public better.

This brings us to the crux of the matter: We need to be crystal clear about what problem we're trying to solve when we buy AI. It's not about having AI for the sake of having AI. It's not about jumping on the latest tech bandwagon. It's about identifying real, significant problems in our operations or in our service to the public, and then evaluating whether AI is the right solution to those problems. This requires a shift in mindset. Instead of starting with the technology and trying to find a use for it, we need to start with our mission, our challenges, and our goals. Then we can look at how AI might help us achieve those goals more effectively. This approach not only leads to better implementations but also helps us justify the investment in AI. We can point to specific problems we're solving, specific improvements we're making. It's about strategic implementation, not just adoption.

So, why should we buy AI? There are several compelling reasons, and it's important that we understand them. First, AI can free up human focus. By automating routine tasks, it allows our workforce to concentrate on higher-level thinking and decision-making. Second, it can free up time for critical thinking. When we're not bogged down in repetitive tasks, we have more mental space for analysis and innovation. Third, AI can allow for more human interaction. By handling routine queries, for instance, it can free up our staff to have more meaningful, complex interactions with the public. Fourth, AI can unleash human creativity. By taking care of mundane tasks, it gives our teams more time and mental energy for creative problem-solving. Fifth, AI can lower barriers to entry for complex tasks, making specialized knowledge more accessible. Sixth, it can reduce learning curves, helping our workforce adapt more quickly to new roles or responsibilities. And finally, AI can reduce single points of failure by distributing knowledge and capabilities across systems. Understanding these benefits helps us target our AI implementations more effectively.

Now, where should we start with AI implementation? The answer is simple, but powerful: Start with toil. What do I mean by toil? I'm talking about those repetitive, time-consuming tasks that don't require much creative or critical thinking. These are the tasks that often frustrate employees, that take up a lot of time but don't provide much value. They're the perfect starting point for AI implementation. Why? Because automating these tasks can have an immediate, tangible impact. It frees up our workforce to focus on more important, more interesting work. It reduces errors that can creep in with repetitive tasks. And it often provides a clear, measurable return on investment. By starting with toil, we can demonstrate quick wins, build momentum for our AI initiatives, and learn valuable lessons that will help us as we move on to more complex implementations.

Let's look at a typical process flow in government acquisition to illustrate where we might find this toil. From developing new requirements to awarding contracts, there are numerous steps that are ripe for AI intervention. Think about the time spent on developing new requirements. How much of that is repetitive work that could be aided by AI? Or consider the back-and-forth of sending in forms, getting feedback, and revising. Could AI help streamline this process? What about market research? Could AI help us gather and analyze information more efficiently? As we move through solicitation, evaluation, and award, there are numerous points where AI could potentially reduce toil, speed up processes, and improve accuracy. By mapping out these processes and identifying the pain points, we can start to see where AI might have the biggest impact.

Now, let's talk about how we can define toil more precisely. Picture a graph with two axes. On one axis, we have the level of creative or critical thinking required for a task. On the other, we have time consumption. The worst toil lives in the quadrant where the required thinking is low, but the time consumption is high. These are our prime targets for AI implementation. These are the tasks that are eating up a lot of our time and resources, but not really utilizing our human capabilities to their fullest. By identifying these tasks, we can prioritize our AI initiatives for maximum impact. It's about working smarter, not harder, and using AI to augment our human workforce in the most effective way possible.
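
To make this concrete, here's a rough Python sketch of how a team might score its own task inventory against those two axes. The task names, the 1-to-5 thinking scale, and the thresholds are all made up for illustration; the point is the quadrant logic, not the numbers.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    thinking_required: int   # 1 = rote, 5 = deep creative/critical thinking
    hours_per_month: float   # time the task consumes today

# Hypothetical task inventory -- replace with your own process map.
tasks = [
    Task("Re-keying form data between systems", thinking_required=1, hours_per_month=60),
    Task("Drafting routine status emails", thinking_required=2, hours_per_month=25),
    Task("Market research synthesis", thinking_required=3, hours_per_month=40),
    Task("Negotiation strategy development", thinking_required=5, hours_per_month=15),
]

# The "worst toil" quadrant: low thinking required, high time consumption.
toil = [t for t in tasks if t.thinking_required <= 2 and t.hours_per_month >= 20]

# Rank prime AI/automation targets by the hours they could give back.
for t in sorted(toil, key=lambda t: t.hours_per_month, reverse=True):
    print(f"{t.name}: {t.hours_per_month} hrs/month, thinking level {t.thinking_required}")
```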

As we move down the time-consumption scale, the impact gets lower because the potential for time savings shrinks, but there is still value here. Automating and applying AI to these low-cognitive-load tasks frees humans up for high-level thought work. This entire bottom tier of work is where folks should look first. Don't look for the flashiest or coolest AI; look for the AI with the greatest impact in the context of your mission and people.

As we move up the scale of required thinking, we can still find AI use cases, but they become more complex and potentially less immediately impactful. These might be tasks that require some degree of analysis or decision-making, but still have elements of repetition or pattern recognition that AI can assist with. For example, AI might not be able to fully automate complex policy analysis, but it could help by summarizing relevant documents, identifying trends, or flagging potential issues for human review. In these cases, AI becomes more of an assistant or augmentation tool rather than a full automation solution. These use cases are still valuable, but they often require more sophisticated AI systems and more careful implementation. They're typically best tackled after we've gained some experience with simpler AI implementations.
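
To make that "assistant, not automation" pattern concrete, here's a small hypothetical sketch. The model_summarize and model_flag_issues functions are stand-ins for whatever model capability you've actually procured (their names and behavior are invented here); the important part is that the AI drafts and flags while the disposition stays with a human.

```python
def model_summarize(document: str) -> str:
    """Stand-in for a call to whatever summarization model you've procured."""
    return document[:200] + "..."  # placeholder behavior

def model_flag_issues(document: str) -> list[str]:
    """Stand-in for a model call that flags passages for human attention."""
    return ["Possible outdated citation in section 2"]  # placeholder output

def assisted_review(document: str) -> dict:
    # The AI augments: it drafts a summary and flags candidate issues...
    return {
        "summary": model_summarize(document),
        "flags": model_flag_issues(document),
        # ...but the actual determination is reserved for a person.
        "disposition": "pending human review",
    }

print(assisted_review("The quarterly policy analysis memo text goes here."))
```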

Eventually, we reach a point where the tasks require so much creative and critical thinking that it's going to be a while before AI is truly useful there. These are the tasks that really leverage uniquely human capabilities – things like high-level strategy development, complex negotiations, or innovative problem-solving. For now, these areas remain primarily in the human domain. But that doesn't mean AI has no role to play. Even in these highly complex tasks, AI can still assist by providing information, analyzing data, or suggesting options. The key is to understand the limitations of current AI technology and to use it as a tool to enhance human capabilities, not replace them. As AI continues to advance, we may see it taking on more of these complex tasks, but for now, our focus should be on using AI to support and augment human decision-making in these areas.

Don't shoehorn AI where it doesn't belong. This is a trap that's all too easy to fall into, especially when there's pressure to "do AI." But forcing AI into processes or problems where it's not the right solution can lead to wasted resources, frustrated employees, and failed projects. Instead, we need to start with our mission and the problems we face. What are we trying to achieve? What obstacles are we encountering? Once we have a clear understanding of our challenges, then we can evaluate whether AI is the right tool to address them. Sometimes, the answer will be yes. Other times, we might find that a simpler automation solution, a data analytics approach, or even a process redesign might be more appropriate. The key is to be problem-focused, not technology-focused. AI is a powerful tool, but it's not a magic wand that can solve every problem.

Here's something we need to be aware of: Right now, vendors are trying to cram AI into everything because that's the market signal they're getting. We've created a gold rush mentality around AI, and vendors are responding accordingly. They're putting "AI-powered" labels on products that might only be using the most basic of algorithms. They're promising AI solutions to problems that might not need AI at all. And why? Because that's what they think we want. If we want something different – and we should – we need to send a different signal. We need to be clear about our problems and our needs. We need to ask tough questions about proposed AI solutions. We need to demand transparency about what the AI is actually doing and how it's making decisions. By being more discerning consumers, we can shape the market to provide the kinds of AI solutions we actually need, not just the ones that sound impressive in a sales pitch.

It's crucial to remember that AI is just one potential solution in our toolkit. Your problems might be better solved with data solutions, automation solutions, or process solutions. IN MOST CASES, ALL OF THE ABOVE. Be open to all possibilities. Sometimes, what looks like an AI problem might actually be a data quality problem. Other times, a simple automation script might be more effective than a complex AI system. And often, rethinking and redesigning our processes can yield better results than trying to layer AI on top of inefficient systems. The key is to approach each challenge with an open mind. Start by clearly defining the problem, then consider all potential solutions. AI might be part of the answer, but it's rarely the entire answer. By keeping our options open and our focus on solving problems rather than implementing specific technologies, we can ensure that we're using the right tool for each job.

Let's recap what we've covered so far: Before buying AI, define the problem you're trying to solve. This can't be emphasized enough. Without a clear problem statement, you risk implementing solutions in search of a problem. Lay out what you do, find the lowest-ROI parts (the toil), and start there. This approach ensures that your AI investments will have a noticeable impact from the start. It also helps you build momentum and support for further AI initiatives. Remember, don't put AI where it doesn't belong. Not every problem needs an AI solution, and forcing AI into inappropriate contexts can lead to costly mistakes. Ensure your AI solution provider is hyper-focused on your specific problems. Generic, one-size-fits-all AI solutions rarely deliver the results we need in government contexts. Look for vendors who are willing to really understand your unique challenges and tailor their solutions accordingly.

Now, let's shift gears and talk about what we're actually buying when we buy AI. This is a crucial point because there's a lot of confusion and misinformation out there. When we say "AI," we're actually talking about a wide range of technologies and approaches. Some AI systems are designed to handle general tasks, while others are highly specialized. Some use machine learning techniques that allow them to improve over time, while others rely on more static, rule-based approaches. Understanding these differences is crucial for making informed decisions about AI procurement. We need to be clear about what capabilities we're actually getting, what kind of data the AI needs to function, how it makes decisions, and how those decisions can be explained or audited. Without this clarity, we risk buying solutions that don't actually meet our needs or that create new problems in terms of transparency and accountability.

When it comes to AI, there are two main categories that you might need: General Purpose and Mission Specific. Some call these "broad" and "narrow" AI products. Understanding the difference between these is crucial for making the right procurement decisions. General Purpose AI is designed to handle a wide range of tasks. These are the jack-of-all-trades of the AI world. They can be incredibly versatile, but they may not excel in any one particular area. Mission Specific AI, on the other hand, is tailored for particular tasks or domains. These are the specialists, designed to handle specific types of problems or work within specific contexts. Both have their place, and understanding when to use each is key to successful AI implementation.

Let's dive deeper into the differences between General Purpose and Mission Specific AI. General Purpose AI can do a wide range of tasks. Think of tools like ChatGPT or other large language models. They can help with tasks like writing, coding, or even creative brainstorming. They're incredibly versatile, able to switch between different types of tasks with ease. This versatility can be a huge asset, especially in environments where needs might vary widely. However, this breadth often comes at the cost of depth. While a General Purpose AI might be able to help with many different tasks, it may not be the best tool for highly specialized or mission-critical functions.

Mission Specific AI, on the other hand, is tailored for particular tasks. In our context, this might be AI specifically designed to write acquisition and contract documents, or to analyze budget data, or to assist with cybersecurity threat detection. These AIs are built with a deep understanding of a particular domain. They often incorporate specialized knowledge and rules that are specific to their area of focus. While they may not be as versatile as General Purpose AI, they can offer a level of precision and effectiveness in their specific domain that general tools can't match. For tasks that are central to our mission or that require a high degree of accuracy and compliance, Mission Specific AI is often the better choice.

Here's a key distinction that can help guide our decision-making: General Purpose AI is often 10% effective for 90% of use cases, while Mission Specific AI is 90% effective for 10% of use cases. What does this mean in practice? It means that General Purpose AI can be a great tool for a wide range of tasks, but it may not be the best solution for our most critical or specialized needs. It's like a Swiss Army knife – useful in many situations, but not always the best tool for a specific job. Mission Specific AI, on the other hand, excels in its particular domain. It's like a specialized surgical instrument – not useful for every task, but incredibly effective for the task it's designed for. Understanding this trade-off is crucial for making informed decisions about AI procurement and implementation.

So, how do we know which one we need? The key is to start by assessing your needs and the specificity of your tasks. Ask yourself: How specialized are the tasks you're looking to automate or enhance with AI? How critical are they to your core mission? How much domain-specific knowledge is required to perform these tasks effectively? If you're looking for a tool to help with a wide range of general tasks – things like drafting emails, summarizing documents, or basic data analysis – a General Purpose AI might be the way to go. But if you're dealing with tasks that require deep domain knowledge, have significant regulatory or compliance components, or are central to your agency's mission, you might want to look at Mission Specific AI solutions. Often, the best approach is a combination of both – using General Purpose AI for day-to-day tasks and Mission Specific AI for your most critical and specialized needs.

Let's recap the key takeaways about AI types: First, remember that there are two major categories of AI – General Purpose and Mission Specific. You probably need both, but in different proportions depending on your specific needs and context. General Purpose AI can be a great tool for improving overall productivity and handling a wide range of tasks. Mission Specific AI, while more limited in scope, can provide significant value in areas that are central to your agency's mission or that require specialized knowledge. You likely only need a couple of general tools – think of these as your AI Swiss Army knives. But you may need many mission-specific tools, each designed to handle a particular aspect of your work. And here's a critical point to keep in mind: mission-specific tools are often harder to buy. They require a deeper understanding of your needs, more careful evaluation, and often more customization. But for critical tasks, this extra effort can pay off in terms of effectiveness and accuracy.

Now, let's talk about a crucial shift we need to make: changing what we're buying when we buy AI. This isn't just about choosing between different AI products. It's about fundamentally rethinking our approach to AI procurement. In the past, we've often bought technology solutions as complete packages – all-in-one systems that promise to solve all our problems. But with AI, this approach can lead to inflexibility, vendor lock-in, and solutions that quickly become outdated. Instead, we need to start thinking about AI procurement in a more modular, flexible way. We need to be buying components and capabilities, not just products. We need to be thinking about how different AI tools can work together, how they can be integrated with our existing systems, and how they can be updated or replaced as technology evolves. This shift in thinking is crucial if we want to stay agile and effective in our use of AI.

So, what does industry want to sell you? Often, it's a complete stack: infrastructure, data models, application planes, and a black box AI solution to your problems. This can be tempting. It promises a complete solution, all from one vendor. But it also has significant drawbacks. These all-in-one solutions can be inflexible, making it difficult to adapt as your needs change or as AI technology evolves. They can create dependency on a single vendor, limiting your options in the future. And perhaps most importantly, they often operate as "black boxes," making it difficult to understand how decisions are being made or to ensure compliance with government regulations and ethical standards. While these complete stacks might seem convenient in the short term, they can create significant challenges down the road.

But there's a better way: a Modular Open System Approach. This approach breaks down AI systems into components – things like data storage, model training, inference engines, and application interfaces. Instead of buying a complete system from one vendor, you're buying or building these components separately and integrating them into a cohesive system. This approach allows for more flexibility and interoperability. If one component isn't working well or becomes outdated, you can replace it without overhauling your entire system. It allows you to choose the best tool for each specific function, rather than being locked into one vendor's ecosystem. And perhaps most importantly, it provides more transparency and control. You can see how each component works and how they fit together, making it easier to ensure compliance, security, and ethical use of AI.
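
To put some software shape on that idea, here's an illustrative Python sketch of components sitting behind small interfaces that you, the buyer, define. The class and method names here are assumptions for illustration, not any vendor's real API; the point is that swapping a vendor means swapping one implementation, not overhauling the system.

```python
from typing import Protocol

# The buyer owns these small interfaces, not any vendor.
class InferenceEngine(Protocol):
    def generate(self, prompt: str) -> str: ...

class DataStore(Protocol):
    def fetch(self, query: str) -> list[str]: ...

# Hypothetical vendor implementations, each behind the common interface.
class VendorAModel:
    def generate(self, prompt: str) -> str:
        return f"[Vendor A response to: {prompt}]"  # stand-in for a real model call

class VendorBStore:
    def fetch(self, query: str) -> list[str]:
        return ["doc-1", "doc-2"]  # stand-in for a real retrieval call

def answer(question: str, model: InferenceEngine, store: DataStore) -> str:
    # The application plane depends only on the interfaces, never the vendors.
    context = store.fetch(question)
    return model.generate(f"Context: {context}\nQuestion: {question}")

# Replacing Vendor A with Vendor C is a one-line change here, nowhere else.
print(answer("Which contracts expire this quarter?", VendorAModel(), VendorBStore()))
```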

With a modular approach, we can mix and match components from different vendors as needed. This is a game-changer in terms of flexibility and effectiveness. Maybe one vendor has the best natural language processing model for your needs, while another has superior data storage solutions. With a modular approach, you can use both. This allows you to create a best-of-breed solution tailored to your specific needs. It also helps future-proof your AI investments. As new technologies emerge or your needs change, you can update or replace individual components without having to rip and replace your entire system. This approach does require more thought and planning upfront. You need to consider how different components will work together and ensure you have the expertise to integrate and manage a more complex system. But the benefits in terms of flexibility, effectiveness, and long-term value can be substantial.

This modular approach extends to all layers of the AI stack: infrastructure, data and models, and applications. At the infrastructure layer, we're talking about things like cloud computing resources, data storage systems, and networking. By keeping this layer modular, you can more easily scale resources up or down as needed, or switch between different cloud providers. At the data and model layer, modularity allows you to use different data sources, experiment with different model architectures, or even combine multiple models for better results. And at the application layer, a modular approach allows you to more easily integrate AI capabilities into your existing software systems, or to create new AI-powered applications that can evolve over time. By thinking modularly at each layer, you create a system that's more flexible, more transparent, and better able to evolve as your needs change and as AI technology advances.

The key to making this modular approach work is using containers and APIs. Containers are a way of packaging software so that it can run reliably in different computing environments. They allow different components of your AI system to be developed, deployed, and scaled independently. APIs (Application Programming Interfaces) provide standardized ways for different software components to communicate with each other. Together, containers and APIs allow different components to work together seamlessly, even if they were developed by different vendors or at different times. This approach not only provides technical benefits in terms of flexibility and scalability, but it also has procurement advantages. It allows you to buy or build AI capabilities in smaller, more manageable chunks, rather than committing to large, monolithic systems. It gives you more options in terms of vendors and solutions. And it makes it easier to pilot new capabilities or replace underperforming components without disrupting your entire system.
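
Here's a minimal sketch of what that can look like from the buyer's side, assuming each vendor component runs as a container exposing an agreed-on HTTP contract. The internal hostnames, the /v1/generate path, and the JSON shape are all invented for illustration; the takeaway is that swapping vendors becomes a configuration change, not a rebuild.

```python
import json
from urllib.request import Request, urlopen

# Each containerized component exposes the same agreed-on contract;
# only the base URL changes when you swap vendors.
COMPONENT_URLS = {
    "summarizer": "http://summarizer.internal:8080",  # Vendor A's container
    "classifier": "http://classifier.internal:8080",  # Vendor B's container
}

def call_component(component: str, payload: dict) -> dict:
    """POST to a component's (hypothetical) /v1/generate endpoint."""
    req = Request(
        url=f"{COMPONENT_URLS[component]}/v1/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

# Usage (requires the containers to actually be running):
# result = call_component("summarizer", {"text": "Long acquisition memo..."})
```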

But here's the catch: this modular, API-driven approach requires vendors to play nice in the sandbox. They need to design their products with interoperability in mind, to adhere to common standards, and to provide clear, well-documented APIs. This isn't always the natural inclination in a competitive market where vendors often want to lock customers into their ecosystem. So how do we encourage this behavior? The answer is simple: vendors will do what you incentivize. If we make interoperability, modularity, and clear APIs key requirements in our procurement processes, vendors will respond. If we prioritize solutions that can easily integrate with other tools and systems, that's what vendors will provide. This approach not only benefits us as buyers, but it can also foster a more innovative, competitive market where vendors compete on the quality of their individual components rather than on the comprehensiveness of their closed ecosystems.

IMPORTANT NOTE: You can't expect direct competitors to play nice together. Two different data providers might be willing to feed the same app, but they are never going to feed one another.

The beauty of this modular, API-driven approach is that it unlocks the ability to experiment, pilot, and buy capabilities as components. This is a fundamental shift from traditional IT procurement. Instead of committing to large, long-term contracts for complete systems, we can start small. We can pilot a new AI capability in one department or for one specific use case. If it works well, we can easily scale it up. If it doesn't meet our needs, we can try something else without having wasted a massive investment. This approach aligns much better with the rapid pace of AI development. It allows us to take advantage of new advances as they emerge, rather than being locked into the technology that was current when we signed a big contract. It also aligns better with agile development methodologies, allowing for continuous improvement and adaptation. By buying capabilities as components, we can build AI systems that evolve with our needs and with the state of the art in AI technology.

To sum up: AI is moving too fast to buy monolithic black box solutions. The only way to keep up is to have the ability to swap components in and out. This requires open architectures, containers, and APIs. It also requires vendors to play nice together, which means you need to buy in a way that incentivizes collaboration. This might seem like a big shift, and it is. It requires changes in how we think about procurement, how we structure contracts, and how we manage AI systems. But it's a necessary shift if we want to truly harness the power of AI in government. It allows us to be more agile, more innovative, and more effective in our use of AI. It helps us avoid vendor lock-in and keeps us from being stuck with outdated technology. And perhaps most importantly, it gives us more control and transparency over the AI systems we're using, which is crucial for maintaining public trust and ensuring ethical, responsible use of AI in government.

Remember, it's actually pretty simple. Don't let the techno jargon scare you. Focus on the problem you're trying to solve, choose the right tool (which may or may not be AI), select the right form of AI if that's the solution, and implement it in a way that allows for flexibility and growth. When in doubt, go faster – but make sure you're going in the right direction.

By adopting this modular, problem-focused approach to AI procurement and implementation, we can ensure that we're not just doing AI, but doing AI right. We can create systems that truly serve our missions and the public, that evolve with our needs and with technological advancements, and that uphold the highest standards of effectiveness, efficiency, and ethical use. The future of AI in government is modular, flexible, and focused on solving real problems. Let's embrace it.