One question we frequently hear from clients, especially those managing web content, is “How can we implement accessibility best practices without breaking the bank or overwhelming our editorial team?”
It’s a valid concern. As a content editor, you’re navigating the daily challenge of maintaining quality while meeting deadlines and managing competing priorities.
When your team decides to prioritize website accessibility, the initial scope can feel daunting. You might wonder “Does this really make a difference?” or “Is remediation worth the effort?” The answer is always a resounding yes.
Whether you’re working on a small site or managing thousands of pages, accessible content improves user experience, ensures legal compliance, boosts SEO performance, and reinforces your brand as inclusive and responsible. As a content editor, you have the power to make steady, meaningful progress with the content you touch every day.
Why Accessibility Creates Business Impact
Accessible content delivers measurable outcomes across multiple business objectives:
Expanded Market Reach: When your content is inaccessible to users with disabilities, you’re limiting your potential audience. Disabilities can also be temporary, like a broken arm. And 70% of seniors are now online—a demographic that often benefits from accessible design principles.
Risk Mitigation: Inaccessible websites can lead to legal complaints under the ADA and other regulations, creating both financial and reputational risks.
Enhanced User Experience: Clear structure, descriptive alt text, and keyboard-friendly navigation improve usability for all users while boosting SEO performance.
Brand Differentiation: Demonstrating commitment to accessibility positions your organization as inclusive and socially responsible.
Implementing Accessibility in Your Editorial Workflow
The challenge isn’t whether to implement accessibility—it’s how to do it efficiently without overwhelming your team or budget.
The Fix-It-Forward Approach
Rather than attempting to overhaul your entire site overnight, we recommend a “fix-it-forward” strategy. This approach ensures all new and updated content meets accessibility standards while gradually improving legacy content. The result? Steady progress without resource strain.
Leverage Open Source Tools
Many CMS platforms offer free accessibility tools that integrate directly into your editorial workflow:
Drupal: Editoria11y Accessibility Checker, Accessibility Scanner, CKEditor Accessibility Auditor
WordPress: WP Accessibility, Editoria11y Accessibility Checker, WP ADA Compliance Check Basic
These tools scan your content and flag common WCAG 2.2 AA issues before publication, transforming accessibility checks into routine quality assurance.
Prioritize High-Impact Changes
Focus your efforts on fixes that significantly improve usability for screen reader and keyboard users:
- Missing image alt text
- Poor heading structure
- Duplicate or unclear link text
- Links that open new windows without warning
- Insufficient color contrast (may require developer collaboration)
Less critical issues can be addressed during routine content updates, spreading the workload over time.
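As a concrete illustration, here is a minimal sketch of the kind of content-level scan these editorial checkers perform. It is written in TypeScript to run in a browser console; the selectors, vague-phrase list, and rules are simplified assumptions for illustration, not the actual logic of Editoria11y or any other plugin.

```typescript
// Simplified editorial accessibility scan (illustrative only).
type Issue = { element: Element; problem: string };

function auditContent(root: ParentNode = document): Issue[] {
  const issues: Issue[] = [];

  // 1. Images missing an alt attribute (alt="" is valid only for decorative images).
  root.querySelectorAll("img:not([alt])").forEach((img) =>
    issues.push({ element: img, problem: "Image has no alt attribute" })
  );

  // 2. Skipped heading levels, e.g. an <h4> directly after an <h2>.
  let lastLevel = 0;
  root.querySelectorAll("h1, h2, h3, h4, h5, h6").forEach((h) => {
    const level = Number(h.tagName[1]);
    if (lastLevel && level > lastLevel + 1) {
      issues.push({ element: h, problem: `Heading skips from h${lastLevel} to h${level}` });
    }
    lastLevel = level;
  });

  // 3. Vague link text that is meaningless out of context.
  const vague = new Set(["click here", "read more", "learn more", "here"]);
  root.querySelectorAll("a[href]").forEach((a) => {
    if (vague.has((a.textContent ?? "").trim().toLowerCase())) {
      issues.push({ element: a, problem: "Unclear link text" });
    }
  });

  // 4. Links that open a new window without warning the user.
  root.querySelectorAll('a[target="_blank"]').forEach((a) => {
    const label = a.getAttribute("aria-label") ?? a.textContent ?? "";
    if (!/new (window|tab)/i.test(label)) {
      issues.push({ element: a, problem: "Opens a new window without warning" });
    }
  });

  return issues;
}

console.table(auditContent().map((i) => i.problem));
```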
Manage Legacy Content Strategically
Don’t let your content backlog create paralysis. Prioritize high-traffic pages and those supporting key user journeys. Since refreshing legacy content annually is already an SEO best practice, use these updates as opportunities to implement accessibility improvements.
Build Team Capabilities
Make accessibility part of your content culture through targeted education and resources. Provide internal training, quick reference guides, and trusted resources to keep editors confident and informed.
Track Progress and Celebrate Wins
Measure success by tracking pages published with zero critical accessibility issues. Share achievements in editorial meetings to reinforce your team’s impact and maintain momentum.
Scaling Your Accessibility Program
While regular content checks provide immediate value, sustainable accessibility success requires periodic comprehensive assessments and usability testing. If your team lacks bandwidth for advanced testing, consider adding this to your 1-2 year digital roadmap. Consistent attention over time proves more sustainable and cost-effective than attempting massive one-time remediation.
Start with Free Tools: Google Lighthouse provides immediate insights into accessibility issues and actionable remediation guidance.
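For teams that want to script this, the same check can run headlessly in Node with the `lighthouse` and `chrome-launcher` npm packages. Treat the sketch below as a starting point and verify the API against the package versions you install, since flags and return shapes have shifted between releases.

```typescript
// Run a Lighthouse accessibility-only audit from Node (sketch).
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function accessibilityScore(url: string): Promise<number | null> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ["accessibility"], // skip perf, SEO, best practices
      output: "json",
    });
    const score = result?.lhr.categories.accessibility.score;
    // Lighthouse reports 0 to 1; multiply by 100 for the familiar score.
    return score == null ? null : Math.round(score * 100);
  } finally {
    chrome.kill();
  }
}

accessibilityScore("https://example.com").then((score) =>
  console.log(`Accessibility score: ${score ?? "n/a"}`)
);
```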
Advanced Assessment Options: For teams ready to expand their program, tools like SortSite, SiteImprove, and JAWS screen reader testing offer comprehensive assessments. These advanced tools can uncover complex issues beyond content-level checks, though they may require developer collaboration for implementation.
Quarterly Program Goals:
- Regular Google Lighthouse assessments for incremental improvements
- Full-site scans or top-page audits with developer support
- Remediation prioritization based on traffic and business value
- Ongoing WCAG 2.2 AA compliance tracking
Consider engaging someone who navigates the web differently than your team does. This perspective will expand your understanding of accessibility’s real-world impact and inform more effective solutions.
Accessibility as Continuous Improvement
Accessibility isn’t a one-time project—it’s an ongoing commitment to inclusive digital experiences.
By integrating accessibility best practices into your publishing workflow, you’ll build a stronger, more inclusive website that protects your brand, empowers your users, and demonstrates digital leadership.
The fix-it-forward approach transforms what seems like an overwhelming challenge into manageable, sustainable progress.
Ready to Accelerate Your Accessibility Journey?
Explore additional insights from our team:
- More than Mouse Clicks: A Non-Disabled User’s Guide to Accessible Web Navigation
- How Does the European Accessibility Act Affect Your Business?
Ready to take action? Contact Oomph to see how we can support your accessibility journey. We start with targeted accessibility audits that identify your highest-impact opportunities, then collaborate with your team to develop a strategic roadmap that aligns with your internal goals while respecting your resources and team size.
When you’re responsible for your organization’s digital presence, it’s natural to focus on what’s visible: the design, the content, the user experience. But beneath every modern website lies a complex ecosystem of technologies, integrations, and workflows that can either accelerate your team’s success or create hidden friction that slows everything down.
That’s where a technical audit becomes invaluable. It’s not just a diagnostic tool—it’s a strategic opportunity to understand the foundation of your platform and make informed decisions about your digital future.
It’s Like a Home Inspection for Your Website
Think about buying a house. You walk through focusing on the big picture—does the kitchen work for your family? Is there enough space? But a good home inspector looks deeper, checking the foundation, examining the electrical system, and spotting that small leak under the bathroom sink that could become a major problem later.
A technical audit takes the same comprehensive approach to your digital platform. We examine not just what’s working today, but what might impact your team’s ability to execute tomorrow. The goal isn’t to find problems for the sake of finding them—it’s to give you the complete picture you need to plan strategically.
Creating Shared Understanding Across Your Entire Team
One of the most powerful outcomes of a technical audit is alignment. Whether you’re managing internal developers, partnering with an agency, or preparing to issue an RFP, having a clear baseline allows everyone to ask better questions and make more accurate decisions.
A strategic technical audit delivers:
Proactive Problem-Solving: Surface technical issues before they become roadblocks to important campaigns or launches.
Performance Optimization: Identify specific improvements that will measurably enhance user experience and conversion rates.
Workflow Enhancement: Reveal friction points that slow down content updates, campaign launches, or day-to-day management tasks.
Vendor Enablement: Provide partners and potential vendors with the context they need to scope work accurately and ask intelligent questions.
Strategic Planning: Create a foundation for long-term digital strategy decisions, from infrastructure investments to editorial tooling.
The organizations we work with often tell us that a technical audit helped them transition from reactive maintenance to proactive digital platform management—a shift that pays dividends across every initiative.
What We Typically Discover
While every platform is unique, certain patterns emerge across industries and organization types. Technical audits frequently reveal:
Security and Maintenance Opportunities: Outdated software, plugins requiring updates, or access configurations that can be strengthened with minimal effort. This often includes ensuring accessibility compliance meets current standards.
Performance Enhancements: Specific optimizations in areas like image compression, caching strategies, or database queries that directly impact user experience. Modern audits also examine search visibility and performance optimization.
Scalability Considerations: Code or architectural decisions that work fine today but could limit growth or flexibility as your needs evolve. This includes evaluating search infrastructure and international expansion capabilities.
Process Improvements: Gaps in version control, deployment workflows, or change management that create unnecessary risk or slow down development cycles.
Editorial Workflow Optimization: Content management processes that feel cumbersome or inconsistent, often because they evolved organically rather than being designed strategically. For global organizations, this includes reviewing translation and localization systems.
Many of these findings aren’t urgent fixes—they’re strategic insights that become incredibly valuable when you’re planning a redesign, launching a major campaign, or evaluating new partnerships.
When a Technical Audit Delivers Maximum Value
You don’t need to wait for problems to emerge. Technical audits are particularly valuable when:
Taking Over Digital Responsibility: You’ve inherited a platform and need a comprehensive understanding of what you’re working with and where the opportunities lie.
Planning Major Initiatives: Before investing in a redesign, platform migration, or significant feature development, understanding your current foundation prevents costly surprises.
Preparing for Vendor Selection: Whether you’re issuing an RFP or evaluating agencies, giving potential partners accurate technical context leads to better proposals and more realistic timelines.
Developing Digital Strategy: When you’re ready to create a roadmap for digital growth, grounding decisions in technical reality rather than assumptions leads to better outcomes. This is especially important when considering AI integration or generative engine optimization strategies.
Our Approach to Technical Audits
We design our audits to build clarity and confidence, not overwhelm you with technical jargon. Rather than simply delivering a report, we walk through findings with your team, prioritize recommendations based on your specific goals, and translate technical insights into actionable business language you can share with stakeholders.
Our methodology goes beyond code analysis. We examine how your platform supports your current workflows, aligns with your organizational objectives, and positions you for future growth. This combination of technical depth and strategic perspective ensures you get insights that drive real business outcomes.
The audit process focuses on partnership, not judgment.
We’re not looking for flaws to criticize—we’re identifying opportunities to help you and your partners make smarter decisions. The result is visibility into the hidden layers of your digital platform and a foundation for more strategic planning, better technology investments, and sustainable long-term success.
Ready to understand what’s really happening under the hood of your digital platform? Let’s talk about how a technical audit could support your goals and strengthen your team’s ability to execute on your digital vision.
If your Drupal site relies on Acquia Search, Acquia’s Solr-based offering, you’re likely facing a migration to SearchStax. We’ve guided numerous organizations through this transition and want to share our proven approach to help you navigate this change successfully.
Before diving into the migration process, it’s worth noting that this transition presents an excellent opportunity to reassess your search strategy entirely. While Solr remains a powerful and robust solution, the search landscape has evolved significantly, with innovative alternatives now available. For organizations considering broader platform transitions, this moment offers strategic value beyond search improvements. Modern React-based solutions can deliver dramatically faster user experiences. Our recent work with ONS demonstrates this potential—by replacing their Solr solution with Algolia Instant Search, we helped them achieve a 40% improvement in search response times while creating a more intuitive experience for their members.
Why the Move to SearchStax?
Acquia announced earlier this year that they’re sunsetting their Acquia Search offering in 2026, positioning SearchStax as the recommended migration path through their new partnership. This transition offers enhanced search capabilities and more direct control over your search environment through SearchStax’s comprehensive dashboard, providing visibility into Solr server performance, data analysis tools, search preview functionality, and advanced configuration options.
The architectural similarity ensures a seamless end-user experience—Solr remains the foundation, requiring no front-end changes for this migration path while delivering improved administrative control.
Our Proven Migration Framework
Through multiple successful migrations, we’ve developed a structured approach that minimizes risk and ensures smooth transitions. Here’s our step-by-step framework:
Phase 1: Foundation Setup
- Secure access to the SearchStax dashboard for complete environment management
- Install the SearchStax modules, including the critical “Solr to SearchStax Site Search Migration” module
- Configure and commit your basic settings to establish the foundation
Phase 2: Testing and Validation
- Deploy changes to DEV or STAGE environments for comprehensive testing
- Validate search functionality, performance, and user experience
- Identify and resolve any configuration issues before production deployment
Phase 3: Production Implementation
- Push validated changes to production environment
- Execute core migration steps including server migration (Drupal’s SearchStax authentication automatically generates endpoint and token configurations), index migration to transfer existing search indexes, and view switching to activate SearchStax indexes across your site
Phase 4: Configuration Management
- Implement configuration overrides and ignores to ensure environment-specific settings
- Secure sensitive data while maintaining dedicated SearchStax server settings per environment
- Export SearchStax indexes and updated views from production to feature branch
- Commit and deploy changes in your next release cycle
Phase 5: Transition Management
- Maintain Acquia search indexes temporarily for rollback capability
- Monitor performance and user experience during initial transition period
- Complete final cleanup by disabling Acquia search module and migration tools once stability is confirmed
Addressing Technical Challenges
Our experience across multiple migrations has revealed common technical hurdles that require proactive attention. Configuration issues with Boost by Date Processor settings, Highlighted Fields errors during index rebuilding, and Facet configuration mismatches between environments are frequent challenges. The key to success lies in early identification during lower environment testing and leveraging Acquia support resources to resolve issues before they impact production.
Each migration presents unique challenges based on your specific configuration and content structure. Our approach prioritizes thorough testing and validation to surface these issues early, ensuring smooth production deployment.
Strategic Search Optimization
Successful migration extends beyond technical implementation. Understanding your content architecture, user behavior patterns, and business objectives enables you to optimize search effectiveness during the transition. This migration provides an ideal opportunity to evaluate search performance metrics, refine content indexing strategies, and enhance user experience design.
By following this proven framework and preparing for potential challenges, your organization can successfully transition to SearchStax while improving both administrative capabilities and user search experience. The result is a more robust, manageable search solution that positions your site for future growth and enhanced user engagement.
Our comprehensive migration expertise extends beyond search implementations to complete platform transformations, ensuring your digital infrastructure supports your long-term strategic objectives.
Ready to begin your SearchStax migration? Our team has successfully guided organizations through this transition, delivering improved search performance and streamlined administration. Contact us to discuss your specific migration needs and timeline.
In 2025, the way people discover and engage with digital content has shifted dramatically. Traditional Search Engine Optimization (SEO) is no longer the only strategy that brings people to your website. Meet Generative Engine Optimization (GEO), the emerging frontier for content creators and researchers looking to earn visibility through AI-driven platforms like ChatGPT, Google’s Gemini, and Perplexity.
If your organization hasn’t begun adapting its content strategy for GEO, now is a great time to start. Here’s everything you need to know about what GEO is, why it matters, and how to start optimizing for it.
What is GEO and How Is It Different From SEO?
While SEO focuses on improving your visibility on traditional search engine results pages (SERPs) by using keywords, backlinks, and technical performance, GEO is about making your content the answer in AI-generated responses.
Rather than presenting users with a list of links as typically experienced with a Google Search, GEO centers on AI tools that synthesize information. These platforms use large language models (LLMs) to provide direct answers to a range of questions. Instead of competing for a top 10 ranking on Google, you’re aiming to be cited, summarized, or linked to by tools like Gemini or ChatGPT.
In short: SEO gets you found, GEO gets you featured.
Why GEO Matters in 2025
AI tools are no longer sidekicks to Google. They’re central players in how people research, compare options, and make decisions. As of May 2025, ChatGPT alone receives over 4.5 billion monthly visits, while Perplexity processes over 500 million searches per month. Google remains the dominant force in online search, with billions of daily visits from users worldwide. But with the direct integration of Gemini into search results, the way people find information is changing. Users can now get answers without ever clicking through to your website (this is called a “zero-click search result”).
Consequently, if your content isn’t showing up in AI answers, you’re missing out on a massive and growing segment of online visibility. Depending on what your website offers, this can be especially important for brand recognition and perception, traffic and lead potential, as well as establishing authority and credibility. In 2025, AI summaries are the new front page of search.
How GEO Works: What AI Tools Are Looking For
Each generative engine has its quirks, but several patterns are emerging across platforms:
1. Structure Matters More Than Ever
AI tools rely on clear, structured content. Use schema markup generously, particularly FAQPage, Organization, Article, and Product types. Structured data helps AI understand your content contextually, making it easier to reference in generated answers.
Tip: Google’s Structured Data Markup Helper is a great place to start reviewing your schema.
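As one example, a minimal FAQPage block injected with TypeScript might look like the sketch below. The schema.org types are standard, but the question and answer are placeholders; many sites would render the same JSON-LD server-side instead of in the browser.

```typescript
// Build and inject a minimal FAQPage JSON-LD block (placeholder content).
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "What are the best family vacation spots in California?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Popular picks include San Diego, Lake Tahoe, and Monterey.",
      },
    },
  ],
};

// Crawlers (and the AI systems built on their output) read this tag.
const tag = document.createElement("script");
tag.type = "application/ld+json";
tag.textContent = JSON.stringify(faqSchema);
document.head.appendChild(tag);
```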
2. E-E-A-T Principles Still Rule
Google’s Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) framework, a core concept for SEO, now extends to AI tools like Gemini. Show credentials, cite data, link to reputable sources, and provide content authored by credible experts.
If you have certifications, awards, partnerships, or original research, feature them clearly. This shows your authority in your area of expertise.
3. Conversation > Keywords
GEO is less about keywords and more about natural language. Write in a conversational tone and frame your content in terms of questions and answers. Think: “What are the best family vacation spots in California?” instead of “California vacation destinations.”
4. Content Freshness is Key
AI platforms (especially Perplexity, which indexes content daily) prioritize content that’s up to date. Refresh evergreen posts annually and use a content calendar to help track when to review content. Be sure to prioritize articles with titles like “Top” or “Best,” as these perform well in answer generation, particularly on ChatGPT.
5. Visuals Are Increasingly Important
Gemini and Perplexity are both investing in multimodal search. Media assets like charts, videos, and well-optimized images can increase the chance of being featured. Also make sure your image alt text, captions, and surrounding content are descriptive.
6. Prioritize Performance & Mobile-Responsiveness
Don’t ignore performance or the site’s mobile experience. A site that performs well on mobile will load quickly, display clearly on small screens, and avoid frustrating interactions (like unclickable buttons or pop-ups). Poor mobile performance (e.g., slow Core Web Vitals) can hurt your rankings, which in turn reduces your visibility to LLMs that rely on search results as part of their input sources.
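To watch these metrics from real user sessions, Google’s open-source `web-vitals` package makes the instrumentation a few lines of TypeScript. This sketch assumes the v3+ API (`onLCP`/`onINP`/`onCLS`); confirm against the version you install.

```typescript
// Log Core Web Vitals from real user sessions (web-vitals v3+ API).
import { onCLS, onINP, onLCP, type Metric } from "web-vitals";

function report(metric: Metric): void {
  // In production, beacon this to your analytics endpoint instead.
  console.log(`${metric.name}: ${metric.value} (${metric.rating})`);
}

onLCP(report); // Largest Contentful Paint: loading speed
onINP(report); // Interaction to Next Paint: responsiveness
onCLS(report); // Cumulative Layout Shift: visual stability
```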
Tool-Specific GEO Tips
Gemini (Google)
- Optimize for the Search Generative Experience (SGE) with crawlable content and Core Web Vitals in check.
- Use a hub and spoke content model to build topical authority. (This model organizes content around a central “hub” topic page that then links to related and more detailed “spoke” pages).
- Regularly monitor impressions and click-through rates in Google Search Console. A dip in clicks with high impressions could signal that your content is being used in AI answers.
Perplexity
- Perplexity emphasizes factual accuracy, source transparency, and user control over search scope, so sources are essential! For your site, focus on citations and factual, digestible content.
- Use Question & Answer formatting to align with Perplexity’s research focus.
- Include multimedia assets and data points that back up your authority on a subject. And don’t stop at video and images; charts, diagrams, and maps are also great sources.
ChatGPT
- Embrace the feeling of personalization. With an emphasis on providing personalized recommendations to its users, ChatGPT seeks out phrases on websites like “top” or “best” that give the user the feeling of receiving personalized insights.
- Optimize your About Us page so that it clearly articulates your mission and values. ChatGPT often uses this to evaluate trustworthiness and authority.
- Strengthen your backlink profile to compete with high-authority sources like Wikipedia, Reddit, and news outlets frequently cited by the model.
Tracking GEO Performance
A consequence of AI summaries is that websites may see a drop in clicks and visits within their analytics, particularly a decrease in organic traffic month over month. With users getting the answers they need from AI-generated search responses, they may no longer need to visit your website to get information. However, those users who do click through often stay longer and discover more pages than they did previously.
Additionally, websites may also see an increase in impressions or referrals from AI assistants. This data is increasingly important to track.
So even if AI tools don’t always send traffic directly, you can still measure their impact. Here’s how:
- Google Analytics 4 (GA4) Segmentation: Create segments by referral source (e.g., chat.openai.com, perplexity.ai, gemini.google.com) to track AI-specific sessions.
- Landing Page Analysis: AI tools often link deep into your site. Use GA4 to monitor which long-tail pages are receiving AI-generated traffic.
- Google Search Console: Identify FAQ-style queries with high impressions but low CTR. These may indicate your content is being summarized in AI answers.
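Because referral classification is the backbone of that GA4 segmentation, some teams also tag AI-assistant traffic in code. The helper below is a hypothetical sketch, and the hostname list is an assumption you would maintain as tools come and go.

```typescript
// Hypothetical helper: classify a session's referrer as an AI assistant.
const AI_REFERRERS = [
  "chat.openai.com",
  "chatgpt.com",
  "perplexity.ai",
  "gemini.google.com",
];

function isAIReferral(referrerUrl: string): boolean {
  try {
    const host = new URL(referrerUrl).hostname;
    return AI_REFERRERS.some((d) => host === d || host.endsWith(`.${d}`));
  } catch {
    return false; // empty or malformed referrer string
  }
}

// Example: tag the session before sending an analytics event.
console.log(isAIReferral("https://www.perplexity.ai/search?q=geo")); // true
console.log(isAIReferral("https://www.google.com/"));                // false
```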
Action Items for Digital Teams & Clients
- Audit your existing content with these optimization strategies in mind. (Tip: You can even use AI tools like Gemini to identify optimization opportunities for particular pages).
- Update schema across all major content types, especially Q&A and organizational pages.
- Refresh your high-performing or evergreen content regularly, especially pieces tied to seasons, events, or top lists.
- Revise your content strategy to include multimedia assets, structured data, and topic clustering.
- Optimize your About page and author bios to strengthen trust signals for LLMs.
Final Thoughts
Optimizing for GEO isn’t just a trend; it’s a fundamental shift in how people find and interact with content online. As AI-generated answers become a dominant part of the discovery experience, your brand’s ability to show up in these spaces could mean the difference between gaining trust or going unnoticed.
By embracing schema, writing conversationally, and refreshing content with purpose, your digital presence can evolve to meet the moment, one where the best answer often wins over the best ranking.
Ready to optimize your content for AI-powered search? Let’s make it happen.
Today I learned about a military term that has come into the culture: VUCA, which stands for volatility, uncertainty, complexity, and ambiguity. That certainly describes our current times.
All of this VUCA makes me concentrate on what is stable and slow to change. It’s easy to get distracted by that which changes quickly and shines in the light. It’s harder to be grateful for what changes slowly. It’s harder still to see what those things might even be.
In the face of AI and the way it will transform all industries (if not now, very soon), it’s important to remember what AI cannot yet do well. Maybe it will learn to create a facsimile of these traits in the future as it becomes more “human” (being trained on human data, flaws and all, might mean it eventually embeds the traits we find undeniably human). For now, though, these skills seem like the ones that can help us navigate the VUCA that is life today.
Be Curious
AI can ask follow-up questions for clarification, but it does not (yet) ask questions for its own curiosity. It asks when it has been directed to do something. It does not sit idle and wonder what the world is like beyond the walls of the chat window.
Humans and high-order animals have curiosity. We seek information and naturally have questions about our world — why is the sky blue? why does the wind blow? why do waves crash onto the shore?
In our operations, Oomph prides itself on Discovery. This is our chance to ask the big questions — why does your business work the way it does? why are those your goals? who is the audience you have vs. the audience you want?
In life and work, curiosity is one of our best traits. This means trying new tools, changing our processes and habits for improved outcomes, and exploring something new just to see what it can do. Even with all the VUCA in the world, approaching uncertainty with curiosity keeps us open and engaged with what we can learn next.
Use Judgement
Another important human trait is judgement, and this continues to be invaluable as humans are needed to evaluate AI outputs.
AI is very good at creating dozens, if not hundreds, of outputs. In fact, probabilistic (not deterministic) output is the strength and sometimes weakness of AI — you almost never get the same answer twice.
Our human expertise is needed to curate these outputs. We need to discard what is average and unremarkable to find the outputs that are surprising and valuable. We need to use our judgement and experience to find the ideas that are applicable to the client, the project, and the moment. Given the same 100 outputs, the right ones might be a different selection depending on the problem we want to solve and the industry in which it will be applied.
Exude Empathy
In the world of design and creating software for humans, empathy is what drives the decisions we need to make. In the flow of vibe coding, our judgments will drive technical and architectural decisions while empathy drives interface design and product feature decisions. Humans are still the ones who need to find the problems that are worth solving.
The language on the page, the helpfulness of the tooltip, and the order in which the form elements appear are some examples of how empathy drives interactions. Empathy helps team members identify confusion and redundancy.
Further, until we are designing for AI Agents and robots as our product’s primary users, we are designing for humans. This means we need to continue to ask humans for feedback, monitor human behavior on our sites and in our apps, and understand why they make the decisions they make. All of this continues to make empathy an important human trait to cultivate.
Make Connections
Mike Bechtel, Chief Futurist at Deloitte Consulting, gave a talk at SXSW this year about how the future favors polymaths instead of specialists. His argument boils down to this: AI is a specialist at almost anything but what humans have shown over time is that the greatest inventions and insights come from disparate teams putting their expertise together or individuals making new connections between disciplines.
Novel ideas are mash-ups of existing ideas more than brand-new ideas that have never been thought of. And these mash-ups come from curious humans who have broad experience, not deep specialization. They are the ones who can identify and bring the specialists together if need be, but most of all, they can make the connections and see the bigger picture to create new approaches.
Support Culture
No matter how smart AI gets, it doesn’t “read the room.” It doesn’t build relationships between others, react to group dynamics, or pick up on body language. In an ambiguous human way, it does not sense when something “feels off.”
In group settings, humans command culture. AI won’t directly help you build trust with a client. It won’t read the faces in the room or over Zoom and pause for questions. It won’t sense that people are not engaging and reacting, and that you therefore need to change tactics while speaking. AI is interested in the facts and not the feelings.
Broad team culture and the culture that exists between individuals is built and nurtured by the humans within them. AI might help you craft a good sales pitch or internal memo, or provide icebreaker ideas, but in the end, humans deliver it. Mentoring, supporting culture, collaborating, and building trust continue to be human endeavors.
Break Patterns
AI is very good at replicating patterns and what has already been created. AI is very good at using its vast amount of data to emphasize best practices with patterns that are the most prevalent and potentially the most successful. But it won’t necessarily find ways to break existing patterns to create new and disruptive ones.
Asking great questions (being curious), applying our experience and judgement, and doing it all with empathy for the humans we support leads to creative, pattern-breaking solutions that AI has not seen before. Best practices don’t stay the best forever. Changes in technology and our interface with it create new best practices.
The easiest answer (the common denominator that AI may reach for) is not always the best solution. There is a time and a place to repeat common patterns for efficiency, but then there are times when we need to create new patterns. Humans will continue to be the ones who can make that judgement.
Be Human
AI will continue to evolve. It may get better at some of the attributes I mention — or at best, it may get better at looking like it has empathy, supports culture, and mashes existing patterns together to create new ones. But for humans, these traits come more naturally. We don’t have to be trained or prompted to use them.
Of all these traits, curiosity may be the most important and impactful one. AI has become our answer-engine, making it less necessary to know it all. But we need to continue to be curious, to wonder about “what if?” AI shouldn’t tell us what to ask, but it should support us in asking deeper questions and finding disparate ideas that could create a new approach.
We no longer need to learn everything. All the answers to what is already known can be provided. It is up to humans to continue with curiosity into what we do not yet know.
The tech industry has never been accused of moving slowly. The exponential explosion of AI tools in 2024, though, sets a new standard for fast-moving. The past few months of 2024 rewrote what happened in the past few years. If you have not been actively paying attention to AI, now is the time to start.
I have been intently watching the AI space for over a year. I started from a place of great skepticism, not willing to internalize the hype until I could see real results. I can now say with confidence that when applied to the correct problem with the right expectations, AI can make significant advancements possible no matter the industry.
In 2024, not only did the large language models get more powerful and extensible, but the tools are being created to solve real business problems. Because of this, skepticism about AI has shifted to cautious optimism. Spurred by the Fortune 500’s investments and early impacts, companies of every shape and size are starting to harness the power of AI for efficiency and productivity gains.
Let’s review what happened in Quarter Four of 2024 as a microcosm of the year in AI.
New Foundational Models in the AI Space
A foundational large language model (LLM) is one which other AI tools can be built from. The major foundational LLMs have been ChatGPT, Claude, Llama, and Gemini, operated by OpenAI & Microsoft, Anthropic, Meta, and Google respectively.
In 2024, additional key players entered the space to create their own foundational models.
Amazon
Amazon has been pumping investments into Anthropic as their operations are huge consumers of AI to drive efficiency. With their own internal foundational LLM, they could remove the need to share their operational data with an external party. Further, like they did with their AWS business, they can monetize their own AI services with their own models. Amazon Nova was launched in early December.
xAI
In May of 2024, xAI secured funding to start creating and training its own foundational models. Founder Elon Musk was a co-founder of OpenAI. The company announced in June that it would build the world’s largest supercomputer, and it was operational by December.
Nvidia
In October, AI chip-maker Nvidia announced its own LLM, named Nemotron, to compete directly with OpenAI and Google — organizations that rely on its chips to train and power their own LLMs.
Rumors of more to come
Apple Intelligence launched slowly in 2024 and uses OpenAI’s models. Industry insiders think it is natural to expect Apple to create its own LLM and position it as a privacy-first, on-device service.
Foundational Model Advancements
While some companies are starting to create their own models, the major players have released advanced tools that can use a range of inputs to create a multitude of outputs:
Multimodal Processing
AI models can now process and understand multiple types of data together, such as images, text, and audio. This allows for more complex interactions with AI tools.
Google’s NotebookLM was a big hit this year for its ability to use a range of data as sources, from Google Docs to PDFs to web links for text, audio, and video. The tool essentially allows the creation of small, custom retrieval-augmented generation (RAG) databases to query and chat with.
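To picture the retrieval step behind a NotebookLM-style workflow, here is a toy TypeScript sketch. Real systems use learned embeddings and a vector database; the bag-of-words cosine similarity below is only a stand-in that shows the shape of the process: find the most relevant sources, then hand them to the model as context.

```typescript
// Toy retrieval step for a RAG workflow (illustrative, not production).
const knowledgeBase = [
  "Expense reports are filed through the finance portal by the 5th.",
  "Vacation requests require manager approval two weeks in advance.",
  "The brand style guide covers logo usage and approved colors.",
];

// Count word occurrences to form a crude document vector.
function termVector(text: string): Map<string, number> {
  const vec = new Map<string, number>();
  for (const word of text.toLowerCase().match(/[a-z']+/g) ?? []) {
    vec.set(word, (vec.get(word) ?? 0) + 1);
  }
  return vec;
}

function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0;
  for (const [word, count] of a) dot += count * (b.get(word) ?? 0);
  const norm = (v: Map<string, number>) =>
    Math.sqrt([...v.values()].reduce((sum, c) => sum + c * c, 0));
  return dot / (norm(a) * norm(b) || 1);
}

// Return the k most relevant sources; a real pipeline would prepend
// these to the user's question in the prompt sent to the LLM.
function retrieve(query: string, docs: string[], k = 2): string[] {
  const q = termVector(query);
  return docs
    .map((doc) => ({ doc, score: cosine(q, termVector(doc)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.doc);
}

console.log(retrieve("How do I file an expense report?", knowledgeBase));
```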
Advanced Reasoning
OpenAI’s o1 reasoning model (pronounced “Oh One”) uses step-by-step “Chain of Thought” to solve complex problems, including math, coding, and scientific tasks. This has led to AI tools that can draw conclusions, make inferences, and form judgments based on information, logic, and experience. The queries take longer but are more accurate and provide more depth.
Google’s Deep Research is a similar product that was released to Gemini users in December.
Enhanced Voice Interaction
More and more AI tools can engage in natural and context-aware voice interactions — think Siri, but way more useful. This includes handling complex queries, understanding different tones and styles, and even mimicking personalities such as Santa Claus.
Vision Capabilities
AI can now “see” and interpret the world through cameras and visual data. This includes the ability to analyze images, identify objects, and understand visual information in real-time. Examples include Meta’s DINOv2, OpenAI’s GPT-4o, and Google’s PaliGemma.
AI can also interact with screen displays on devices, allowing for a new level of awareness of sensory input. OpenAI’s desktop app for Mac and Windows is contextually aware of what apps are available and in focus. Microsoft’s Copilot Vision integrates with the Edge browser to analyze web pages as users browse. Google’s Project Mariner prototype allows Gemini to understand screen context and interact with applications.
While still early and fraught with security and privacy implications, the technology will lead to more advancements for “Agentic AI” which will continue to grow in 2025.
Agentic Capabilities
AI models are moving towards the ability to take actions on behalf of users. No longer confined to chat interfaces alone, these new “Agents” will perform tasks autonomously once trained and set in motion.
Note: Enterprise leader Salesforce launched Agentforce in September 2024. Despite the name, these are not autonomous Agents in the same sense. Custom agents must be trained by humans, given instructions, parameters, prompts, and success criteria. Right now, these agents are more like interns that need management and feedback.
Specialization
2024 also saw an increase in models designed for specific domains and tasks. With reinforcement fine-tuning, companies are creating tools for legal, healthcare, finance, stocks, and sports.
Examples include Sierra, which offers a specifically trained customer service platform, and LinkedIn’s hiring assistant agents.
What this all means for 2025
It’s clear that AI models and tools will continue to advance, and businesses that embrace AI will be in a better position to thrive. To be successful, businesses need an experimental mindset of continuous learning and adaptation:
- Focus on AI Literacy — Ensure your team understands AI and its capabilities. Start with use cases that add value immediately.
- Prioritize Data Quality — AI models need high-quality, relevant data to be effective. Start cleaning and preparing your internal data before implementing AI at scale.
- Combine AI and Human Expertise — Use AI to augment human capabilities, not replace them. Think of AI as a junior employee who will require input, alignment, and reinforcement.
- Experiment and Iterate — Be willing to try new approaches and adapt based on results. Include measurement in your plans — collect data before and after to benchmark progress.
- Embrace Ethical AI — Implement policies to ensure AI is used responsibly and ethically. Investigate ways the company can offset carbon and support cleaner energy, as AI tools require more electricity than non-AI tools. Understand hallucinations, as well as the newer, more complex problem of “scheming” in reasoning models.
- Prepare for Change — Understand that technology is constantly evolving, and business models will need to adapt.
While the models will continue to get better into 2025, don’t wait to explore AI. Even if the existing models never improve, they are powerful enough to drive significant gains in business. Now is the time to implement AI in your business. Choose a model that makes sense and is low-friction — if you are an organization that uses Microsoft products, start with a trial of AI add-ons for office tools, for example. Start accumulating experience with the tools at hand, and then expand to include multiple models to evaluate more complex AI options that may have greater business impact. It almost doesn’t matter which you choose, as long as you get started.
Oomph has started to experiment with AI ourselves and Drupal has exciting announcements about integrating AI tools into the authoring experience. If you would like more information, please reach out for a chat.
Oomph has been quiet about our excitement for artificial intelligence (A.I.). While the tech world has exploded with new A.I. products, offerings, and add-ons to existing product suites, we have been formulating an approach to recommend A.I.-related services to our clients.
One of the biggest reasons why we have been quiet is the complexity and the fast-pace of change in the landscape. Giant companies have been trying A.I. with some loud public failures. The investment and venture capitalist community is hyped on A.I. but has recently become cautious as productivity and profit have not been boosted. It is a familiar boom-then-bust of attention that we have seen before — most recently with AR/VR after the Apple Vision Pro five months ago and previously with the Metaverse, Blockchain/NFTs, and Bitcoin.
There are many reasons to be optimistic about applications for A.I. in business. And there continue to be many reasons to be cautious as well. Just like any digital tool, A.I. has pros and cons and Oomph has carefully evaluated each. We are sharing our internal thoughts in the hopes that your business can use the same criteria when considering a potential investment in A.I.
Using A.I.: Not If, but How
Most digital tools now have some kind of A.I. or machine-learning built into them. A.I. has become ubiquitous and embedded in many systems we use every day. Given investor hype for companies that are leveraging A.I., more and more tools are likely to incorporate A.I.
This is not a new phenomenon. Grammarly has been around since 2015 and by many measures, it is an A.I. tool — it is trained on human written language to provide contextual corrections and suggestions for improvements.
Recently, though, embedded A.I. has exploded across markets. Many of the tools Oomph team members use every day have A.I. embedded in them, across sales, design, engineering, and project management — from Google Suite and Zoom to Github and Figma.
The market has already decided that business customers want access to time-saving A.I. tools. Some welcome these options, and others will use them reluctantly.
Either way, the question has very quickly moved from “Should our business use A.I.?” to “How can our business use A.I. tools responsibly?”
The Risks that A.I. Poses
Every technological breakthrough comes with risks. Some pundits (both for and against A.I. advancements) have likened its emergence to the Industrial Revolution. A similarly high level of positive impact is possible, but the cultural, societal, and environmental repercussions could follow a similar trajectory as well.
A.I. has its downsides. When evaluating A.I. tools as a solution to our clients’ problems, we keep this list of drawbacks and negative effects handy, so that we may review them and think about how to mitigate their negative effects:
- A.I. is built upon biased and flawed data
- Bias & flawed data leads to the perpetuation of stereotypes
- Flawed data leads to Hallucinations & harms Brands
- Poor A.I. answers erode Consumer Trust
- A.I.’s appetite for electricity is unsustainable
We have also found that our company values are a lens through which we can evaluate new technology and any proposed solutions. Oomph has three cultural values that form the center of our approach and our mission, and we add our stated 1% For the Planet commitment to that list as well:
- Smart
- Driven
- Personal
- Environmentally Committed
For each of A.I.’s drawbacks, we use the lens of our cultural values to guide our approach to evaluating and mitigating those potential ill effects.
A.I. is built upon biased and flawed data
At its core, A.I. is built upon terabytes of data and billions, if not trillions, of individual pieces of content. Training data for Large Language Models (LLMs) like ChatGPT, Llama, and Claude encompasses mostly public content as well as special subscriptions through relationships with data providers like the New York Times and Reddit. Image generation tools like Midjourney and Adobe Firefly require billions of images to train them and have skirted similar copyright issues while gobbling up as much free public data as they can find.
Because LLMs require such a massive amount of data, it is impossible to curate those data sets to only what we may deem as “true” facts or the “perfect” images. Even if we were able to curate these training sets, who makes the determination of what to include or exclude?
The training data would need to be free of bias and free of sarcasm (a very human trait) for it to be reliable and useful. We’ve seen this play out with sometimes hilarious results. Google “A.I. Overviews” have told people to put glue on pizza to prevent the cheese from sliding off or to eat one rock a day for vitamins & minerals. Researchers and journalists traced these suggestions back to the training data from Reddit and The Onion.
Information architects have a saying: “All Data is Dirty.” It means no one creates “perfect” data, where every entry is reviewed, cross-checked for accuracy, and evaluated by a shared set of objective standards. Human bias and accidents always enter the data. Even the simple act of deciding what data to include (and therefore, which data is excluded) is bias. All data is dirty.
Bias & flawed data leads to the perpetuation of stereotypes
Many of the drawbacks of A.I. are interrelated — “all data is dirty” is directly related to D.E.I. Gender and racial biases surface in the answers A.I. provides. A.I. will perpetuate the harms that these biases produce as these tools become easier and easier to use and more and more prevalent. These harms are ones which society is only recently grappling with in a deep and meaningful way, and A.I. could roll back much of our progress.
We’ve seen this start to happen. Early reports from image creation tools discuss a European white male bias inherent in these tools — ask one to generate an image of someone in a specific occupation, and you’ll receive many white males in the results, unless that occupation is stereotypically “women’s work.” When AI is used to perform HR tasks, the software often advances those it perceives as male more quickly, and penalizes applications that contain female names and pronouns.
The bias is in the data and very, very difficult to remove. The entirety of digital written language over-indexes privileged white Europeans who can afford the tools to become authors. This comparably small pool of participants is also dominantly male, and the content they have created emphasizes white male perspectives. To curate bias out of the training data and create an equally representative pool is nearly impossible, especially when you consider the exponentially larger and larger sets of data new LLM models require for training.
Further, D.E.I. overflows into environmental impact. Last fall, the Fifth National Climate Assessment outlined the country’s climate status. Not only is the U.S. warming faster than the rest of the world, but the report directly linked reductions in greenhouse gas emissions with reductions in racial disparities. Climate impacts are felt most heavily in communities of color and low-income communities; therefore, climate justice and racial justice are directly related.
Flawed data leads to “Hallucinations” & harms Brands
“Brand Safety” and How A.I. can harm Brands
Brand safety is the practice of protecting a company’s brand and reputation by monitoring online content related to the brand. This includes content the brand is directly responsible for creating about itself as well as the content created by authorized agents (most typically customer service reps, but now AI systems as well).
The data that comes out of A.I. agents will reflect on the brand employing the agent. A real-life example is Air Canada. Its A.I. chatbot gave a customer an answer that contradicted the information at the URL it provided. The customer chose to believe the A.I. answer, while the company tried to argue that it could not be responsible if the customer didn’t follow the URL to the more authoritative information. In court, the customer won and Air Canada lost, resulting in bad publicity for the company.
Brand safety can also be compromised when a 3rd party feeds A.I. tools proprietary client data. Some terms and condition statements for A.I. tools are murky while others are direct. Midjourney’s terms state,
“By using the Services, You grant to Midjourney […] a perpetual, worldwide, non-exclusive, sublicensable no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute text and image prompts You input into the Services”
Midjourney’s Terms of Service Statement
That makes it pretty clear that by using Midjourney, you implicitly agree that your data will become part of their system.
The implication that our clients’ data might become available to everyone is a huge professional risk that Oomph avoids. Even using ChatGPT to summarize content covered by an NDA can open hidden risks.
What are “Hallucinations” and why do they happen?
It’s important to remember how current A.I. chatbots work. Like a smartphone’s predictive text tool, LLMs form statements by stitching together words, characters, and numbers based on the probability of each unit succeeding the previously generated units. The predictions can be very complex, adhering to grammatical structure and situational context as well as the initial prompt. Given this, they do not truly understand language or context.
At best, A.I. chatbots are a mirror that reflects how humans sound without a deep understanding of what any of the words mean.
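A toy sketch makes that mechanism concrete. The hand-written probabilities below are purely illustrative (a real model scores tens of thousands of tokens at every step), but the sampling loop is the core idea: the same process that produces fluent output occasionally takes a low-probability detour, which is one way odd statements emerge.

```typescript
// Illustrative next-token sampling. P(next word | "The cheese slid off the ...")
const nextToken: Record<string, number> = {
  pizza: 0.55,
  plate: 0.3,
  table: 0.1,
  moon: 0.05, // unlikely continuations still get picked sometimes
};

function sample(dist: Record<string, number>): string {
  let r = Math.random();
  for (const [token, p] of Object.entries(dist)) {
    if ((r -= p) <= 0) return token;
  }
  return Object.keys(dist)[0]; // guard against floating-point drift
}

// Mostly "pizza", occasionally "moon": fluency and hallucination
// come from the same probabilistic machinery.
for (let i = 0; i < 5; i++) console.log(sample(nextToken));
```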
A.I. systems try their best to provide an accurate and truthful answer without a complete understanding of the words they are using. A “hallucination” can occur for a variety of reasons, and it is not always possible to trace a hallucination’s origins or reverse-engineer it out of a system.
As many recent news stories state, hallucinations are a huge problem with A.I. Companies like IBM and McDonald’s couldn’t get hallucinations under control and pulled their A.I. ordering system from McDonald’s drive-thrus because of the headaches it caused. If they can’t make their investments in A.I. pay off, it makes us wonder about the usefulness of A.I. for consumer applications in general. And all of these gaffes hurt consumers’ perception of the brands and the services they provide.
Poor A.I. answers erode Consumer Trust
The aforementioned problems with A.I. are well-known in the tech industry. In the consumer sphere, A.I. has only just started to break into the public consciousness. Consumers are outcome-driven. If A.I. is a tool that can reliably save them time and reduce work, they don’t care how it works, but they do care about its accuracy.
Consumers are also misinformed or have a very surface-level understanding of how A.I. works. In one study, only 30% of people correctly identified six different applications of A.I. People don’t have a complete picture of how pervasive A.I.-powered services already are.
The news media loves a good fail story, and A.I. has been providing plenty of those. With most of the media coverage of A.I. being either fear-mongering (“A.I. will take your job!”) or about hilarious hallucinations (“A.I. suggests you eat rocks!”), consumers will be conditioned to mistrust products and tools labeled “A.I.”
And for those who have had a first-hand experience with an A.I. tool, a poor A.I. experience makes all A.I. seem poor.
A.I.’s appetite for electricity is unsustainable
The environmental impact of our digital lives is invisible. Cloud services that store our lifetime of photographs sound like feathery, lightweight repositories but are actually giant, electricity-guzzling warehouses full of heat-producing servers. Cooling these data factories and providing the electricity to run them are major infrastructure issues cities around the country face. And then A.I. came along.
While difficult to quantify, there are some scientists and journalists studying this issue, and they have found some alarming statistics:
- Training GPT-3 required more than 1,200 MWh which led to 500 metric tons of greenhouse gas emissions — equivalent to the amount of energy used for 1 million homes in one hour and the emissions of driving 1 million miles. GPT-4 has even greater needs.
- Research suggests a single generative A.I. query consumes energy at four or five times the magnitude of a typical search engine request.
- Northern Virginia needs the equivalent of several large nuclear power plants to serve all the new data centers planned and under construction.
- Even as consumers shift demand away from fossil fuels and onto the electric grid (think electric cars, more electric heat and cooking), power plant executives are lobbying to keep coal-powered plants around longer to meet increased demand. Already, soaring power consumption is delaying coal plant closures in Kansas, Nebraska, Wisconsin, and South Carolina.
- Google emissions grew 48% in the past five years in large part because of its wide deployment of A.I.
While the consumption needs are troubling, quickly creating more infrastructure to support these needs is not possible. New energy grids take multiple years and millions if not billions of dollars of investment. Parts of the country are already straining under the weight of our current energy needs and will continue to do so — peak summer demand is projected to grow by 38,000 megawatts nationwide in the next five years.
While a data center can be built in about a year, it can take five years or longer to connect renewable energy projects to the grid. While most new power projects built in 2024 are clean energy (solar, wind, hydro), they are not being built fast enough. And utilities note that data centers need power 24 hours a day, something most clean sources can’t provide. It should be heartbreaking that carbon-producing fuels like coal and gas are being kept online to support our data needs.
Oomph’s commitment to 1% for the Planet means that we want to design specific uses for A.I. instead of very broad ones. The environmental impact of A.I.’s energy demands is a major factor we consider when deciding how and when to use A.I.
Using our Values to Guide the Evaluation of A.I.
As we previously stated, our company values provide a lens through which we can evaluate A.I. and look to mitigate its negative effects. Many of the solutions cross over and mitigate more than one effect, and they represent a shared commitment to extracting the best results from any tool in our set.
Smart
- Limit direct consumer access to the outputs of any A.I. tools, and put a well-trained human in the middle as curator. Despite the pitfalls of human bias, it’s better to be aware of them rather than allow A.I. to run unchecked
- Employ 3rd-party solutions with a proven track record of hallucination reduction
Driven
- When possible, introduce a second proprietary dataset that can counterbalance training data or provide additional context for generated answers that are specific to the client’s use case and audience
- Restrict A.I. answers when qualifying, quantifying, or categorizing other humans, directly or indirectly
Personal
- Always provide training to authors using A.I. tools and be clear with help text and microcopy instructions about the limitations and biases of such datasets
1% for the Planet
- Limit the amount of A.I. an interface pushes at people without first allowing them to opt in — A.I. should not be the default
- Leverage “green” data centers if possible, or encourage the client using A.I. to purchase carbon offset credits
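To make the first of those "Smart" guardrails concrete, here is a minimal sketch of the human-in-the-middle pattern in TypeScript. Every name and type below is hypothetical and illustrative, not any particular product's API:

```typescript
// Hedged sketch of the "human in the middle" pattern: A.I. output is never
// published directly; it lands in a review queue for a trained curator.
// All names and types here are hypothetical illustrations.

interface Draft {
  id: string;
  generatedText: string;
  status: "pending_review" | "approved" | "rejected";
}

const reviewQueue: Draft[] = [];

function submitAiDraft(generatedText: string): Draft {
  const draft: Draft = {
    id: crypto.randomUUID(), // available in modern browsers and Node 19+
    generatedText,
    status: "pending_review",
  };
  reviewQueue.push(draft); // nothing ships until a human signs off
  return draft;
}

function review(draft: Draft, approved: boolean): Draft {
  draft.status = approved ? "approved" : "rejected";
  return draft;
}

// Usage: a curator approves or rejects each A.I.-generated draft.
const draft = submitAiDraft("A.I.-generated product description…");
review(draft, false); // a human decided this one doesn't ship
```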
In Summary
While this article may read as strongly anti-A.I., we still have optimism and excitement about how A.I. systems can be used to augment and support human effort. Tools created with A.I. can make tasks and interactions more efficient, can help non-creatives jumpstart their creativity, and may eventually become agents that assist with complex tasks that are draining and unfulfilling for humans to perform.
For consumers or our clients to trust A.I., however, we need to provide ethical evaluation criteria. We cannot use A.I. as a solve-all tool when it has clearly displayed limitations. We aim to continue to learn from others, experiment ourselves, and evaluate appropriate uses for A.I. with a clear set of criteria that align with our company culture.
To have a conversation about how your company might want to leverage A.I. responsibly, please contact us anytime.
Additional Reading List
- “The Politics of Classification” (YouTube). Dan Klyn, guest lecture at the UM School of Information Architecture, 09 April 2024. A review of IA problems vs. A.I. problems, how classification is problematic, and how mathematical smoothness is unattainable.
- “Models All the Way Down.” Christo Buschek and Jer Thorp, Knowing Machines. A fascinating visual deep dive into training sets and the problematic ways in which they were curated, whether by A.I. or by humans, each with its own pitfalls.
- “AI spam is already starting to ruin the internet.” Katie Notopoulos, Business Insider, 29 January 2024. When garbage results flood Google, it’s bad for users — and for Google.
- “Racial Discrimination in Face Recognition Technology.” Harvard University, 24 October 2020. The title explains itself well.
- “Women are more likely to be replaced by AI, according to LinkedIn.” Fast Company, 04 April 2024. Many workers are worried that their jobs will be replaced by artificial intelligence, and a growing body of research suggests that women have the most cause for concern.
- “Brand Safety and AI.” Writer.com. An overview of what brand safety means and how it is usually governed.
- “AI and designers: the ethical and legal implications.” UX Design, 25 February 2024. Not only can using training data potentially introduce legal troubles, but submitting your data to be processed by A.I. does as well.
- “Can Generative AI’s Hallucination Problem Be Overcome?” Louis Poirier, C3.ai, 31 August 2023. A company claims to have a solution for A.I. hallucinations but doesn’t completely describe how in its marketing.
- “Why AI-generated hands are the stuff of nightmares, explained by a scientist.” Science Focus, 04 February 2023. Whether it’s hands with seven fingers or extra-long palms, A.I. just can’t seem to get it right.
- “Sycophancy in Generative-AI Chatbots.” NN/g, 12 January 2024. Human summary: Beyond hallucinations, LLMs have other problems that can erode trust: “Large language models like ChatGPT can lie to elicit approval from users. This phenomenon, called sycophancy, can be detected in state-of-the-art models.”
- “Consumer attitudes towards AI and ML’s brand usage U.S. 2023.” Valentina Dencheva, Statista, 09 February 2023.
- “What the data says about Americans’ views of artificial intelligence.” Pew Research Center, 21 November 2023.
- “Exploring the Spectrum of ‘Needfulness’ in AI Products.” Emily Campbell, The Shape of AI, 28 March 2024.
- “AI’s Impact On The Future Of Consumer Behavior And Expectations.” Jean-Baptiste Hironde, Forbes, 31 August 2023.
- “Is generative AI bad for the environment? A computer scientist explains the carbon footprint of ChatGPT and its cousins.” The Conversation, 23 May 2023.
Everyone’s been saying it (and, frankly, we tend to agree): We are currently in unprecedented times. It may feel like a cliché. But truly, when you stop and look around right now, not since the advent of the first consumer-friendly smartphone in 2008 has the web design and development industry seen such vast technological advances.
A few of these innovations have been kicking around for decades, but they’ve only moved into the greater public consciousness in the past year. Versions of artificial intelligence (AI) and chatbots have been around since the 1960s, and even virtual reality (VR) and augmented reality (AR) have been attempted with some success since the 1990s (Thad Starner). But now, these technologies have reached a tipping point as companies join the rush to create new products that leverage AI and VR/AR.
What should we do with all this change? Let’s think about the immediate future for a moment (not the long-range future, because who knows what that holds). We at Oomph have been thinking about how we can start to use this new technology now — for ourselves and for our clients. Which ideas that seemed far-fetched only a year ago are now possible?
For this article, we’ll take a closer look at VR/AR, two digital technologies that either layer on top of or fully replace our real world.
VR/AR and the Vision Pro
Apple’s much-anticipated launch into the headset game shipped in early February 2024. With it came much hype, mostly centered on the price tag and the (for now) limited ecosystem. Now that the dust has settled, what has this flagship device told us about the future?
Meta, Oculus, Sony, and others have been in this space since 2017, but the Apple device debuted a better experience in many respects. For one, Apple nailed the 3D visuals, using many cameras and low latency to reproduce a digital version of the real world around the wearer — in real time. All of this tells us that VR headsets are moving beyond gaming applications and becoming more mainstream for specific types of interactions and experiences, like virtually visiting the Eiffel Tower or watching the upcoming Summer Olympics.
What Is VR/AR Not Good At?
Comfort
Apple’s version of the device is large, uncomfortable, and too heavy to wear for long, and its competitors are not much better. Devices will become smaller and more powerful over time, but for now, wearing one as an infinite virtual monitor for the entire workday is impossible.
Space
VR generally needs space for the wearer to move around. The Vision Pro is very good at overlaying virtual items into the physical world around the wearer, but for an application that requires the wearer to be fully immersed in a virtual world, it is a poor experience to pantomime moving through a confined space. Immersion is best when the movements required to interact are small or when the wearer has adequate space to participate.
Haptics
“Haptic” feedback is the sense that physical objects provide. Think about turning a doorknob: You feel the surface, the warmth or coolness of the material, how the object can be rotated (as opposed to pulled like a lever), and the resistance from the springs.
Phones provide small amounts of haptic feedback in the form of vibrations and sounds. Haptics are on the horizon for many VR platforms but have yet to be built into headset systems. For now, haptics come from add-on products like haptic gaming chairs.
What Is VR/AR Good For?
Even without haptics and free spatial range, immersion and presence in VR are very effective. It turns out that the brain only requires sight and sound to create a believable sense of immersion. Have you tried a virtual roller coaster? If so, you know it doesn’t take much to feel a sense of presence in a virtual environment.
Live Events
VR and AR’s most promising applications involve live in-person and televised events. Beyond a flat “screen” of the event, AR-generated spatial representations and new ways to interact with the action are expanding. A prototype video with Formula 1 racing is a great example of how this application can increase engagement with these events.
Imagine if your next virtual conference were available in VR and AR. How much more immersed would you feel?
Museum and Cultural Institution Experiences
Similar to live events, AR can enhance museum experiences greatly. With AR, viewers can look at an object in its real space — for example, a sarcophagus would actually appear in a tomb — and access additional information about that object, like the time and place it was created and the artist.
Museums are already experimenting with experiences that leverage your phone’s camera or VR headsets. Some have experimented with virtually showing artwork by the same artist that other museums own to display a wider range of work within an exhibition.
With the expansion of personal VR equipment like the Vision Pro, the next obvious step is to bring the museum to your living room, much like the National Gallery in London has brought its collection into public spaces.
Try Before You Buy (TBYB)
Using a version of AR with your phone to preview furniture in your home is not new. But what other experiences can benefit from an immersive “try before you buy” experience?
- Test-drive a new car with VR, or experience driving a real car on a real track in a mixed-reality game. As haptic feedback becomes more prevalent, the experience of test-driving will become even closer to the real thing.
- Even retailers of smaller purchases have used VR and AR successfully to let shoppers trial products, including AR for fashion retail, virtual eyeglass try-ons, and preview apps for cosmetics. Even do-it-yourself retailer Lowe’s experimented with fully haptic VR in 2018. But those are all big-name retailers. The real future of VR/AR-powered TBYB experiences will allow smaller companies to jump into the space, as Shopify has enabled for its merchants.
- Visit destinations before traveling. With VR, you could visit fragile ecosystems without affecting the physical environment or get a sense of the physical space before traveling to a new spot. Visitors who require special assistance could preview the amenities beforehand. Games have been developed for generic experiences like deep sea diving, but we expect more specific travel destinations to provide VR experiences of their own, like California’s Redwood Forest.
What’s Possible With VR/AR?
The above examples of what VR/AR is good at are just a few ways the technology is already in use — each of which can be a jumping-off point for leveraging VR/AR for your own business.
But what are some new frontiers that have yet to be fully explored? What else is possible?
- What if a digital sculptor or 3D model maker could create new three-dimensional models in a three-dimensional virtual space? The application for architects and urban planners is just as impactful.
- What if medical training could be immersive, anatomically accurate, and reduce the need for cadavers? What if rare conditions could be simulated to increase exposure and aid in accurate diagnoses?
- What if mental health disorders could be treated with the aid of immersive virtual environments? Exposure therapy can aid in treating and dealing with anxiety, depression, and PTSD.
- What if highly skilled workers could have technical mentors virtually assist and verify the quality of a build? Aerospace, automotive, and other manufacturing industry experts could visit multiple locations virtually and go where they’re needed most.
- What if complex, math-heavy sciences could offer immersive environments where data can be manipulated and explored? Think of the possibilities for fields like geology, astronomy, and climate science.
- What if movies were told from a more personal point of view? What if the movie viewer felt more like a participant? How could someone’s range of experiences expand with such immersive storytelling?
Continue the AR/VR Conversation
The Vision Pro hasn’t taken the world by storm as Apple likely hoped it would. It may still be too early for the market to figure out what AR/VR is good for. But we don’t think it will go away completely, either. With big investments like Apple’s, it is reasonable to assume the next version will find a stronger foothold in the market.
Here at Oomph, we’ll keep pondering and researching impactful ways that tomorrow’s technology can help solve today’s problems. We hope these ideas have inspired some of your own explorations, and if so, we’d love to hear more about them.
Drop us a line and let’s chat about how VR/AR could engage your audience.
High-quality content management systems (CMS) and digital experience platforms (DXP) are the backbone of modern websites, helping you deliver powerful, personalized user experiences. The catch? You have to pick your platform first.
At Oomph, we have a lot of love for open-source platforms like Drupal and WordPress. Over the years, we’ve also built applications for our clients using headless CMS tools, like Contentful and CosmicJS. The marketplace for these solutions continues to grow exponentially, including major players like Adobe Experience Manager, Sitecore, and Optimizely.
With so many options, developers and non-developers with a project on the horizon typically start by asking themselves, “Which CMS or DXP is the best fit for my website or application?” While that is no doubt an excellent question to consider, I think it’s equally important to ask, “Who is going to implement the solution?”
CMS/DXP Solutions Are More Alike Than You Might Think
I recently attended the annual Healthcare Internet Conference and spoke with quite a few healthcare marketers about their CMS tools. I noticed a common thread: Many people think their CMS (some of which I mentioned above) is hard to use and doesn’t serve them well.
That may very well be the case. Not all CMS tools are created equal; some are better suited to specific applications. However, most modern CMS and DXP tools share many of the same features — they just come at different price points. So here’s the multi-million-dollar question: If most of these products provide access to the same or similar tools, why are so many customers displeased with them?
Common Challenges of CMS/DXP Implementation
Often, we find that CMS users get frustrated because the tool they chose wasn’t configured to meet their specific needs. That doesn’t necessarily mean that it was set up incorrectly. That’s the beauty of many of today’s CMS and DXP products: They don’t take a one-size-fits-all approach. Instead, they allow for flexibility and customization to ensure that each customer gets the most out of the product.
While enticing, that flexibility also burdens the user with ensuring that their system is implemented effectively for their specific use case. In our experience, implementation is the make-or-break of a website development project. These are just a handful of things that can derail the process:
- The implementation partner didn’t fully understand how their client works, so features weren’t configured accordingly.
- The demands of user experience overshadowed the needs of content editors and admins.
- Hefty licensing fees ate away at the budget, leaving behind funds that don’t quite cover a thorough implementation.
- The project was rushed to meet a tight deadline.
- The CMS introduces new features over time that add complexity to the admin or editing experience.
- Old features get sunsetted as new capabilities take their place.
Most of the work we do at Oomph is to help our clients implement new websites and applications using content management systems like Drupal. We have decades of combined experience helping our clients create the ideal user experience for their target audience while also crafting a thoughtful content editing and admin experience that is easy to use.
But what does that look like in practice?
4 Steps for a Successful CMS Implementation
Implementation can be the black box of setting up your CMS: You don’t know what you don’t know. So, we like to get our clients into a demo environment as soon as possible to help them better understand what they need from their CMS. Here’s how we use it to navigate successful CMS implementation:
1. Assess the Capabilities of the CMS
The first step seems the simplest at face value: Consider what the CMS needs to do for you, then find a CMS that includes all of those features. Content modeling (more on that below) is a key part of that process, but so is auditing your team’s abilities.
Some teams may be developer-savvy and can handle less templated content-authoring features. Others may need a much more drag-and-drop experience. Either use case is normal and acceptable, but what matters is that you identify your needs and find both a CMS and an implementation process that meets them. That leads us to the next point.
2. Test-Drive the CMS Early and Often
You wouldn’t buy a car without test-driving it first. Yet we find that people are often more than willing to license a CMS without looking under the hood.
Stepping into the CMS for a test drive is a huge part of getting the content editing experience right. We’ve been designing and engineering websites and platforms using CMS tools for well over a decade, and we’ve learned a thing or two along the way about good content management and editing experiences.
Even with out-of-the-box, vanilla Drupal, the sky’s the limit for how you can configure it. But that also means that nothing is configured, and it can be difficult to get a sense of how best to configure and use it. Rather than diving into the deep end, we work with our clients to test the waters. We immediately set up a project sandbox that offers pre-configured content types, allowing you to enter content and play with a suite of components within the sleek drag-and-drop interface.
3. Align User Experience with Content Authoring
Beyond pre-configured content and components, our sandbox sites include a stylish, default theme. The idea is to give you a taste both of what your live site could look like and what your content authoring experience might be. Since so many teams struggle to balance those two priorities, this can be a helpful way to figure out how your CMS can give you both.
4. Finalize Your Features & Capabilities
While a demo gives you a good idea of the features you’ll need, it might also include features you don’t. But discovering where our pre-built options aren’t a good fit is a good thing — it helps us understand exactly what your team does and does not need.
Our goal is to give you something tangible to react to, whether that’s love at first type or a chance to uncover capabilities that would serve you better. We’ve found this interactive yet structured process is the closest thing to a CMS silver bullet: it consistently leads to better outcomes.
Content Modeling
Another key part of our project workflow is what we call content modeling. During this phase, we work with you to identify the many content types you’ll have on your website or application. Then, we can visualize them in a mapping system to determine things like:
- What relationships exist between these different content types?
- Who should have access to a content type, and what governance should be in place to ensure all content is accurate, on brand, and approved for publishing?
- What features do you need to support content at every level? For example, at the field level, do you need a drop-down with predefined values that only certain people can edit, or do you need an open-text field a content editor can customize?
With a solid content model in place, we can have a higher level of confidence that our CMS implementation will create the right content editing experience for your team. From there, we actually implement the content model in the CMS as soon as possible so that you can test it out and we can make refinements before getting too far along in the process.
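As an illustration, here is a minimal sketch of what a documented content model can look like. The content types, fields, and roles below are hypothetical; in a real project the model lives in your CMS configuration, but writing it out like this makes relationships and governance rules explicit:

```typescript
// Hypothetical content model sketch. In a real project this lives in CMS
// configuration (e.g., Drupal content types and fields); the names below
// are illustrative only.

type Role = "editor" | "reviewer" | "admin";

// A field-level constraint: a drop-down with predefined values
// that only certain people can edit.
type Department = "Cardiology" | "Oncology" | "Pediatrics";

interface Author {
  id: string;
  name: string;
  department: Department;
}

interface Article {
  id: string;
  title: string;
  body: string;               // an open-text field a content editor can customize
  author: Author;             // a relationship between two content types
  status: "draft" | "in_review" | "published";
  editableBy: Role[];         // governance: who may touch this content type
}

// A draft article wired to its author. The model makes relationships,
// fields, and governance rules explicit before any CMS work begins.
const example: Article = {
  id: "a-1",
  title: "New Cardiology Wing Opens",
  body: "…",
  author: { id: "u-7", name: "Jordan", department: "Cardiology" },
  status: "draft",
  editableBy: ["editor", "admin"],
};
```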
Content Moderation & Governance
Many clients tell us they either have too much or too little control over their content. In some cases, their content management system is so templated or rigid that marketing teams can’t quickly spin up landing pages and instead have to rely on development teams to assist. Other teams have too much freedom, allowing employees to easily deploy content that hasn’t been approved by the appropriate team members or strays from company brand standards.
Here at Oomph, our mantra is balance. A good content editing process needs both flexibility and governance, so teams can create content when they need to, but avoid publishing content that doesn’t meet company standards. Through discovery, we work with clients to determine which content types need flexibility and which ones don’t.
If a content type needs to be flexible, we create a framework that allows for agility while still ensuring that users can only select approved colors, font types, and font sizes. We also identify which content needs to be held in moderation and approved before it can be published on the website.
Taking the time to discuss governance in advance creates a CMS experience that strikes the right balance between marketing freedom and brand adherence.
Implementation Turns a Good CMS Into a Great One
Modern CMS/DXP solutions have mind-blowing features, and they will only continue to get more complex over time. But the reality is that while picking a CMS that has the features you need is important, how it’s configured and implemented might matter even more. After all, how helpful is it to have a CMS with embedded artificial intelligence if making simple copy updates to your home page is a nightmare?
Implementation is the “it” factor that makes the difference between a CMS you love and one you’d rather do your job without.
Interested in solving your CMS headaches with better implementation? Let’s talk.
So much of healthcare happens in person. But as the pressure to connect online continues to climb, what are the challenges you face as a healthcare marketer — and the opportunities you’d love to capitalize on?
Whatever they are, chances are that attendees of the most recent Healthcare Internet Conference (HCIC) can relate. HCIC brings together marketers and digital leaders to explore the unique and sometimes unexpected ways digital innovation is shaping the industry.
Though this was Oomph’s first time attending, HCIC has actually been around since 1996. The tight-knit community that’s formed over the past few decades offered a safe space for candid conversations about navigating digital in a post-pandemic world. Here are five topics that ruled those conversations, how marketers like you are approaching them, and what we see as the biggest opportunities for each.
1. To Adopt or Not To Adopt Artificial Intelligence (AI)
A whopping 86% of healthcare companies use some form of AI. But despite the number of organizations adopting AI for everything from IT operations to workforce management, healthcare marketers are still largely stuck in an AI gray zone.
Many HCIC attendees shared that they were unsure which AI tools were ready to use today or if using AI could introduce regulatory, privacy, and ethical concerns. Would using AI-generated photos in marketing misrepresent patients or providers? Can chatbots effectively and appropriately provide support beyond basic admin and billing functions?
While some organizations are already building their own tools (a diagnostic AI to help call center employees determine when a caller should go to urgent care caught our eye), others are interested in out-of-the-box solutions.
Our take: Proceed with caution. AI can revolutionize patient care and operations, but it can also introduce costly and reputationally damaging privacy and regulatory issues. If you aren’t well-versed in compliance, work with a partner who is to ensure your AI actually helps — not hurts.
2. How To Combat Skyrocketing Employee Turnover
The 2020s will go down in history as one of the most difficult decades to work in healthcare. Kicked off by the COVID-19 pandemic, employee attrition only continues to rise as more employees enter retirement or simply burn out.
While exact attrition rates vary by the healthcare segment, data from Oracle shows that hospitals lose nearly 20% of their employees every year. For nursing homes, that number skyrockets to 94%.
Given that the cost of replacing an employee is between six and nine months of that employee’s salary, HCIC attendees were understandably interested in swapping ideas to boost employee retention. An intranet was a fairly universal solution, but the question of what makes a truly effective intranet remained.
Our take: Embrace personalization. Talk to your employees, understand their needs, then build custom features and integrations that meet them. We saw firsthand through our work with Rhode Island-based health system Lifespan that this is an effective way to build community and engagement, both of which are key to retention.
3. The Eternal Quest for Patient Acquisition
There are two questions that keep most healthcare organizations awake at night: How do you find patients? And, once you’ve found them, how do you keep them?
Attendees almost universally agreed that healthcare is a long way from creating seamless experiences that keep patients coming back. Many systems are fragmented, regulated, or outdated, creating barriers to patient care that patients are all too happy to leave behind.
Our take: We think healthcare organizations can take a page out of other industries’ books here. Like any industry, a well-designed user experience (UX) is the foundation for interactions that delight patients.
4. Personalizing the Patient Experience
On the topic of patient experience, one of the most talked-about strategies was personalization. While personalization has long been a favorite technique of ours, we were encouraged to hear the number of HCIC attendees who shared our focus.
Many saw landing pages as the “front door” to the digital patient experience and understood that personalization could level up those interactions. We also heard excitement around combining personalization with integration — from using implicit data from the patient’s online actions to explicit data from Epic’s MyChart to personalize the information that users see.
Our take: In our experience, adding a digital or content experience platform to your content management system (CMS) can do a lot of the heavy lifting for you. But like with anything in healthcare, the key is operating within privacy and regulatory restraints. Be sure to work with an implementation partner who’s equally skilled in technology and compliance.
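As a hedged sketch of what that could look like, here is a simple personalization rule fed by implicit and explicit signals. Every name below is hypothetical, and any real implementation must operate within HIPAA and your own privacy policies:

```typescript
// Hedged sketch: choosing a landing-page banner from implicit signals
// (pages the visitor browsed) and explicit ones (data a patient shared,
// e.g., via a MyChart integration). All names are hypothetical, and a
// real implementation must respect HIPAA and privacy constraints.

interface VisitorSignals {
  recentlyViewed: string[];         // implicit: behavior observed on the site
  hasUpcomingAppointment?: boolean; // explicit: from a patient-portal API
}

function pickBanner(signals: VisitorSignals): string {
  if (signals.hasUpcomingAppointment) return "appointment-prep-checklist";
  if (signals.recentlyViewed.includes("/services/maternity")) {
    return "maternity-virtual-tour";
  }
  return "find-a-doctor"; // sensible default for anonymous visitors
}

console.log(pickBanner({ recentlyViewed: ["/services/maternity"] }));
// -> "maternity-virtual-tour"
```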
5. Finding the Right CMS
Content management systems (CMS) aren’t always a hot topic at conferences, but we were pleasantly surprised by how often they came up in conversation at HCIC — and how many opinions attendees had about them.
Most attendees felt strongly about which platforms they loved and which ones they hated. While heavyweights like Sitecore, Drupal, Optimizely, and ScorpionCMS were fixtures of the conversation, the primary takeaway is that having a good CMS experience is critical, but can be challenging to achieve.
Our take: Take the time to get your CMS right. Choosing a lackluster CMS or underwhelming implementation partner could lock you into a multi-year headache. Many attendees we spoke to are still extremely cost-conscious in the wake of COVID, so they expect a major investment like a CMS to last at least five years. We always suggest setting a budget, mapping an ideal content architecture, and inventorying key features, then finding the right CMS to meet all those needs.
Let’s Continue the Conversation
The thing we’ll remember most about HCIC is the connection. As challenging as healthcare can be, it also brings people together: patients, providers, and, yes, even healthcare marketers. The five topics HCIC homed in on are important, but they’re just a snapshot of the many conversations healthcare teams are having about marketing, technology, and the patient experience.
Our hope is that the conversation will continue until the next HCIC and beyond. If you’re a healthcare marketer, what else is on your mind? We’d love to talk about it.
There’s a new acronym on the block: MACH (pronounced “mock”) architecture.
But like X is to Twitter, MACH is more a rebrand than a reinvention. In fact, you’re probably already familiar with the M, A, C, and H and may even use them across your digital properties. While we’ve been helping our clients implement aspects of MACH architecture for years, organizations like the MACH Alliance have recently formed in an attempt to provide clearer definition around the approach, as well as to align their service offerings with the technologies at hand.
One thing we’ve learned at Oomph after years of working with these technologies? It isn’t an all-or-nothing proposition. There are many degrees of MACH adoption, and how far you go depends on your organization and its unique needs.
But first, you need to know what MACH architecture is, why it’s great (and when it’s not), and how to get started.
What Is MACH?
MACH is an approach to designing, building, and testing agile digital systems — particularly websites. It stands for microservices, APIs, cloud-native, and headless.
Like a composable business, MACH unites a few tried-and-true components into a single, seamless framework for building modern digital systems.
The components of MACH architecture are:
- Microservices: Many online features and functions can be separated into more specific tasks, or microservices. Modern web apps often rely on specialized vendors to offer individual services, like sending emails, authenticating users, or completing transactions, rather than a single provider to rule them all.
- APIs: Microservices interact with a website through APIs, or application programming interfaces. APIs let developers change the site’s architecture without impacting the applications that consume those APIs, and make it easy to offer those same APIs to customers.
- Cloud-Native: A cloud-based environment hosts websites and applications via the internet, ensuring scalability and performance. Modern cloud technology like Kubernetes, containers, and virtual machines keeps applications consistent while meeting the demands of your users.
- Headless: Modern JavaScript frameworks like Next.js and Gatsby empower intuitive front ends that can be coupled with a variety of back-end content management systems, like Drupal and WordPress. This gives administrators the authoring power they want without impacting end users’ experience (see the sketch after this list).
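To make the API and headless pieces concrete, here is a minimal sketch of a front end pulling content from a CMS over its API. We assume a WordPress back end exposing its standard REST API; the base URL is a hypothetical placeholder:

```typescript
// Minimal sketch of a headless front end pulling content from a CMS API.
// Assumes a WordPress back end exposing its standard REST API; the base
// URL is a hypothetical placeholder.

type Post = {
  id: number;
  title: { rendered: string };
  excerpt: { rendered: string };
};

const CMS_BASE = "https://cms.example.com"; // hypothetical back-end URL

async function fetchLatestPosts(count = 5): Promise<Post[]> {
  const res = await fetch(`${CMS_BASE}/wp-json/wp/v2/posts?per_page=${count}`);
  if (!res.ok) throw new Error(`CMS responded with ${res.status}`);
  return res.json() as Promise<Post[]>;
}

// The front end (Next.js, Gatsby, etc.) renders this data however it likes;
// swapping the back end later only means changing this thin fetch layer.
fetchLatestPosts().then((posts) =>
  posts.forEach((p) => console.log(p.title.rendered)),
);
```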
Are You Already MACHing?

Even if the term MACH is new to you, chances are good that you’re already doing some version of it. Here are some telltale signs:
- You have one vendor for single sign-on (SSO), one vendor to capture payment information, another to handle email payment confirmations, and so on.
- You use APIs to integrate with tech solutions like Hubspot, Salesforce, PayPal, and more.
- Your website — or any website feature or application — is deployed within a cloud environment.
- Your website’s front end is managed by a different vendor than its back end.
If you’re doing any of the above, you’re MACHing. But the magic of MACH is in bringing them all together, and there are plenty of reasons why companies are taking the leap.
5 Benefits of MACH Architecture
If you make the transition to MACH, you can expect:
- Choice: Organizations that use MACH don’t have to settle for one provider that’s “good enough” for the countless services websites need. Instead, they can choose the best vendor for the job. For example, when Oomph worked with One Percent for America to build a platform offering low-interest loans to immigrants pursuing citizenship, that meant leveraging the Salesforce CRM for loan approvals, while choosing “Click and Pledge” for donations and credit card transactions.
- Flexibility: MACH architecture’s modular nature allows you to select and integrate individual components more easily and seamlessly update or replace those components. Our client Leica, for example, was able to update its order fulfillment application with minimal impact to the rest of its Drupal site.
- Performance: Headless applications often run faster and are easier to test, so you can deploy knowing you’ve created an optimal user experience. For example, we used a decoupled architecture for our client Wingspans to create a stable, flexible, and scalable site with lightning-fast performance for its audience of young career-seekers.
- Security: Breaches are generally limited to individual features or components, keeping your entire system more secure.
- Future-Proofing: A MACH system scales easily because each service is individually configured, making it easier to keep up with technologies and trends and avoid becoming out-of-date.
5 Drawbacks of MACH Architecture
As beneficial as MACH architecture can be, making the switch isn’t always smooth sailing. Before deciding to adopt MACH, consider these potential pitfalls.
- Complexity: With MACH architecture, you’ll have more vendors — sometimes a lot more — than if you run everything on one enterprise system. That’s more relationships to manage and more training needed for your employees, which can complicate development, testing, deployment, and overall system understanding.
- Challenges With Data Parity: Following data and transactions across multiple microservices can be tricky. You may encounter synchronization issues as you get your system dialed in, which can frustrate your customers and the team maintaining your website.
- Security: You read that right — security is a potential pro and a con with MACH, depending on your risk tolerance. While your whole site is less likely to go down with MACH, working with more vendors leaves you more vulnerable to breaches for specific services.
- Technological Mishaps: As you explore new solutions for specific services, you’ll often start to use newer and less proven technologies. While some solutions will be a home run, you may also have a few misses.
- Complicated Pricing: Instead of paying one price tag for an enterprise system, MACH means buying multiple subscriptions that can fluctuate more in price. This, coupled with the increased overhead of operating a MACH-based website, can burden your budget.
Is MACH Architecture Right for You?
In our experience, most brands could benefit from at least a little bit of MACH. Some of our clients are taking a MACH-lite approach with a few services or apps, while others have adopted a more comprehensive MACH architecture.
Whether MACH is the right move for you depends on your:
- Platform Size and Complexity: Smaller brands with tight budgets and simple websites may not need a full-on MACH approach. But if you manage content across multiple sites and apps, handle a high volume of communications and transactions, and need to iterate quickly to keep up with rapid growth, MACH is often the way to go.
- Level of Security: If you’re in a highly regulated industry and need things locked down, you may be better off with a single enterprise system than a multi-vendor MACH solution.
- ROI Needs: If it’s time to replace your system anyway, or you’re struggling with internal costs and the diminishing value of your current setup, it may be time to consider MACH.
- Organizational Structure: If different teams are responsible for distinct business functions, MACH may be a good fit.
How To Implement MACH Architecture
If any of the above scenarios apply to your organization, you’re probably anxious to give MACH a go. But a solid MACH architecture doesn’t happen overnight. We recommend starting with a technology audit: a systematic, data-driven review of your current system and its limitations.
We recently partnered with career platform Wingspans to modernize its website. Below is an example of the audit and the output: a seamless and responsive MACH architecture.
The Audit
- Surveys/Questionnaires: We started with some simple questions about Wingspans’ website, including what was working, what wasn’t, and the team’s reasons for updating. They shared that they wanted to offer their users a more modern experience.
- Stakeholder Interviews: We used insights from the surveys to spark more in-depth discussions with team members close to the website. Through conversation, we uncovered that website performance and speed were their users’ primary pain points.
- Systems Access and Audit: Then, we took a peek under the hood. Wingspans had already shared its poor experiences with previous vendors and applications, so we wanted to uncover simpler ways to improve site speed and performance.
- Organizational Structure: Understanding how an organization functions helps us design a system that meets its needs. The Wingspans team was excited about modern technology and relatively savvy, but they also needed a system that could accommodate thousands of authenticated community members.
- Marketing Plan Review: We also wanted to understand how Wingspans would talk about their website. They sought an “app-like” experience with super-fast search, which gave us insight into how their MACH system needed to function.
- Roadmap: Wingspans had a rapid go-to-market timeline. We simplified our typical roadmap to meet that goal, knowing that MACH architecture would be easy to update down the road.
- Delivery: We recommended Wingspans deploy as a headless site (a site we later developed for them), with documentation we could hand off to their design partner.
The Output
We later deployed Wingspans.com as a headless site using the following components of MACH architecture:
- Microservices: Wingspans leverages microservices like Algolia for site search, Amazon Web Services (AWS) for email sends and static site hosting, and Stripe for managing transactions (see the sketch after this list).
- APIs: Wingspans.com communicates with the above microservices through simple APIs.
- Cloud-Native: The new website uses cloud-computing services like Google Firebase, which supports user authentication and data storage.
- Headless: Gatsby powers the front-end design, while Cosmic JS is the back-end content management system (CMS).
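As an illustration of the microservice pattern (not Wingspans’ actual code), here is a minimal sketch of a front end querying a search service like Algolia via its v4 JavaScript client, with placeholder credentials:

```typescript
// Sketch of the microservice pattern: the front end talks to a dedicated
// search service through its API rather than a monolithic back end.
// Uses the Algolia v4 JavaScript client; the app ID, key, and index name
// are hypothetical placeholders.
import algoliasearch from "algoliasearch/lite";

const client = algoliasearch("YOUR_APP_ID", "YOUR_SEARCH_ONLY_KEY");
const index = client.initIndex("careers"); // hypothetical index name

async function searchCareers(query: string) {
  const { hits } = await index.search(query, { hitsPerPage: 10 });
  return hits;
}

// Swapping search providers later means changing only this thin layer;
// the rest of the site never knows.
searchCareers("marine biologist").then((hits) => console.log(hits.length));
```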
Let’s Talk MACH
As MACH evolves, the conversation around it will, too. Wondering which components may revolutionize your site and which to skip (for now)? Get in touch to set up your own technology audit.
On the hunt for the right vendor to help with your website refresh or app launch? Creating a request for proposal (RFP) is often an essential – and even required – first step. But much like digital experiences themselves, RFPs can range widely in quality.
At their best, RFPs clearly educate potential partners about your needs and help you compare your choices more easily. At their worst, RFPs are vague, complicated, and time-consuming for everyone involved. That can prompt some vendors to bypass them completely, leaving you with a less-than-stellar pool of options.
Many agencies see RFPs as a high-risk, low-reward business development strategy and are selective about responding, since they can eat up so much time. Case in point: The average company spends 32 hours and has 9 team members work on each RFP, yet wins less than half of them.
Despite all this, RFPs aren’t going anywhere. So how can you create an RFP that will actually attract the type of partner you want?
At Oomph, we review hundreds of RFPs every year to find the projects that are best suited for our skills. After sorting through the good, bad, and truly ugly, we’ve established an internal scoring system for potential RFPs — and learned some valuable lessons along the way.
Here are nine key factors that can help ensure your RFP stands out from the rest.
1. Embrace open communication.
By establishing open lines of communication from the outset, you can build a sense of trust and clarify questions to ensure the proposed solutions meet your needs.
If holding calls with individual vendors isn’t an option, hosting a pre-bid call is one effective way to gain face time with several prospective partners at once. Connecting live can give you a sense of how your two teams will mesh. For example, if an agency flakes on the call, those issues will likely only get worse during the project itself. On the flip side, a vendor who gets your goals and needs can often give you a more customized and accurate estimate.
2. Be as transparent as possible with your budget.
Ah, the million-dollar question: How much will this all cost? Some organizations decline to share a budget in their RFP, either because they’re not allowed to or because they don’t want vendors to inflate their price to match the stated budget. But omitting a dollar figure can quickly lead to frustration on all sides: You don’t want to waste time sorting through responses that aren’t in your budget, and agencies don’t want to respond to potential clients who can’t meet their rates.
By providing a targeted cost, you build trust with potential partners and avoid wasting time on solutions that are out of your price range. When including a budget, be clear on how vendors should respond. Do they need to list every expense as a line item or can they group costs? Should they include additional items that they think could enhance the project?
If you’re in an industry where you can’t share a budget, consider at least including a not-to-exceed figure. Otherwise, be prepared to sift through huge swings in costs. This is one instance where getting specific about your desired solution can actually be a good thing. Noting that you’re looking for a templated website vs. a custom build, for example, can help you avoid getting some proposals that come in at $20,000 and others that come in at $200,000.
3. Give ample time during the process.
RFPs are a lot of work and you don’t want to rush. A hasty process can increase the likelihood of mistakes, omissions, or incomplete responses from potential partners.
If you’re accepting questions on your RFP, make sure you leave enough time after answering them for agencies to formulate their response. If you have a second round, create some breathing room for agencies to prepare, especially if you’re expecting a presentation.
4. Provide the basics on your company.
Vendors want to know who you are and what you’re about. This includes basic details like the products or services you offer, your location, and your audience.
You should also include details on what makes your organization different. What sets you apart? What’s your mission? This will help vendors better understand your company’s goals, allowing them to tailor their proposals to your specific request.
Finally, let vendors know who will be spearheading the project on your team. Are there multiple decision-makers? Will your board need to sign off? Sharing information on your working style can help attract vendors who are a good fit and ensure they plan for the right level of collaboration in their scope.
5. Focus on your project goals, not the solution.
When creating an RFP, it’s easy to get caught up in the specific deliverable you think you need. But try to think big picture.
What do you want to accomplish? What was the impetus behind this work? For example, if your online leads are slowing down or it’s been ages since you last refreshed your design, share the details in your RFP. Make sure to include any project constraints as well, like if you want the winning firm to use your existing technical setup or if you’re open to new solutions.
By focusing on challenges and goals rather than prescriptive solutions, you allow potential partners to propose ideas that you may not have considered — but that could be more effective than your initial solution.
6. Let applicants know which response formats are (and aren’t) OK.
List out the required elements you want to see in a proposal, like solution overview, a proposed timeline, and relevant work samples. Providing a standard framework can make it easier for agencies to assess the effort involved before deciding whether to respond and help you compare the strengths and weaknesses of various approaches. If any items are high-priority, be clear about where you expect applicants to spend the most time.
While providing details on what you’d like to see in the proposal is a smart move, be flexible if possible on how agencies deliver their response. If your project involves design work, allowing agencies to submit a PowerPoint deck instead of a written response can give you a glimpse at their design skills and how they interpret your brand based on the RFP. If you need proposals submitted in a specific format, go digital if possible. Most agencies will click “Pass” on any RFP that requires submitting 10 printed copies of a 30-page response.
7. Be clear on what will set applicants apart.
Think about what would make your partner a perfect fit for your organization. Is it experience in your industry or working with your preferred CMS? Is hiring a woman- or BIPOC-owned firm important to you? Are you eager to find a local agency that you can collaborate with in person?
By explicitly stating what will set top-tier candidates apart, you not only motivate vendors to put their best foot forward, but also give them the guidance they need to do so. Providing specific evaluation criteria in your RFP can also help ensure that the vendors who respond are the ones best suited to your project’s needs.
8. Consider your invites carefully.
The RFP process is meant to help you choose a single partner to meet your needs. Finding your ideal match requires carefully considering their expertise, proposed solution, and alignment with your company’s culture and values. So when you send your RFP, aim for quality over quantity in responses. Reviewing proposals from vendors who lack the necessary skills or who are a poor fit can lead to wasted time and, ultimately, a less successful project.
Beyond posting your RFP across your channels, think about how to proactively find the best partner for the job. Doing research in your industry and even asking competitors or affiliates who they’ve worked with can help narrow down your search.
9. Hold off on those references.
We get it – it’s helpful to get a second (or third, or fourth…) opinion when choosing a partner. However, it’s best to wait until you’ve narrowed it down to a few potential partners before reaching out to their references.
Why? You don’t want to waste your time contacting references for vendors who may not end up being a good fit for your project. Some vendors also may not want their clients contacted over and over again for early-stage RFPs. By waiting until you’ve narrowed down your list, you’ll likely have better, more specific questions to ask the references based on the vendor’s proposed solution.
Creating a Win-Win RFP Process
With the help of a well-crafted RFP, you can attract top-tier vendors who will be eager to flex their creative muscles and propose solutions that achieve your project’s goals. By prioritizing transparency, setting clear expectations, and valuing communication, you can establish a strong foundation for a productive and successful collaboration.
Need a fresh perspective on your digital project RFP? We’d love to talk about it.