Compliance with the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), is a mandatory legal obligation for covered businesses, with significantly increased financial and operational risks starting in 2025.

The Critical Risk: Escalating Fines and Penalties

As of January 1, 2025, the California Privacy Protection Agency (CPPA) increased monetary thresholds and fines to align with the Consumer Price Index.

Key Deadlines and New Requirements (2026–2028)

Regulators have moved from a passive to an active enforcement model, removing the mandatory “grace period” for fixing violations before penalties are applied.

Does This Apply to My Business?

A for-profit business must comply if it does business in California and meets any of the following:

Operational Impact of Non-Compliance

Beyond fines, non-compliance can lead to court-ordered injunctions, mandatory regular audits, and the required deletion of valuable data assets. It also risks significant reputational damage and customer churn, as modern consumers increasingly prioritize data security when choosing where to spend.

Is your website ready for California’s evolving privacy standards? Non-compliance isn’t just a legal risk — it’s a business one that can result in millions in fines, mandatory audits, and lasting reputational damage. Our team helps organizations like yours navigate complex regulatory requirements with confidence, so you can focus on what matters most. Talk to our team today.

As a Project Manager at a digital agency, you can bet I’m paying close attention to the AI boom. It’s reshaping the digital landscape and redefining what it means to manage projects in this industry. My inbox is filled with webinars, newsletters, and think pieces about the latest tools, and I’m constantly evaluating how AI can help me work smarter, whether that means improving client outcomes, streamlining internal processes, or making my own day-to-day more efficient.

For me, that means looking at the routine responsibilities that quietly consume the most time, such as reviewing Jira backlogs and creating tickets, monitoring budgets and tracking overages, updating Gantt timelines, and cleaning up and sending out meeting notes. These are the areas where I’m eager to see AI meaningfully lighten the load so I can spend more time focusing on strategic conversations.

And the AI potential really is exciting, even if the tools sometimes miss the mark. What’s increasingly clear is that AI is reshaping not only the role of a Project Manager but also how I approach my work day to day. My goal is to share a snapshot of my takeaways so you can learn from my early experiments as you dive deeper into the evolving world of AI and project management.

What’s Already Working for Me

Semantic Search and Generative AI

ChatGPT has become my daily sidekick. I subscribe to the Plus plan, and it’s worth every penny. I use it to clean up communications and ensure my messages are clear and professional. It’s a lifesaver for writing outlines for blogs, RFPs, web copy, and presentations, and for analyzing documents for consistency in voice and tone.

It’s also helped me build tools I never thought I could. Using ChatGPT, I wrote a script that scrapes the web for industry articles and automatically emails my team a weekly digest of relevant news: all custom-built, fully automated, and free to run. Not bad for a non-developer!
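For the curious, a digest script like that can be surprisingly small. The sketch below is a simplified illustration, not the actual script: the article list, addresses, and send step are hypothetical placeholders. It formats collected articles into a plain-text digest and wraps it in an email message ready to hand to smtplib.

```python
# Hedged sketch of a weekly digest script. Articles are assumed to have
# already been collected (e.g. from RSS feeds); names are placeholders.
from email.message import EmailMessage

def build_digest(articles):
    """Format a list of (title, url) pairs as a plain-text digest."""
    lines = ["This week's industry reading:"]
    for title, url in articles:
        lines.append(f"- {title}\n  {url}")
    return "\n".join(lines)

def make_email(body, sender, recipient):
    """Wrap the digest text in a message ready to send via smtplib."""
    msg = EmailMessage()
    msg["Subject"] = "Weekly Industry Digest"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

digest = build_digest([
    ("AI and the PM role", "https://example.com/ai-pm"),
    ("Jira automation tips", "https://example.com/jira"),
])
email = make_email(digest, "pm@example.com", "team@example.com")
```

From here, a scheduler (cron, a CI job, or a cloud function) can run the script weekly at no cost, which is what makes the whole thing free to operate.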

Meeting Recording and Transcription

Fathom AI Notetaker has been my game-changer of the year. It lets me focus on conversations instead of scrambling to take notes. After each meeting, I receive an email summary with key points, assignees, and next steps.

What used to take me an hour to clean up and send out the meeting recap now takes 15 minutes.

Plus, I can revisit recordings anytime for a refresher.

Tools That Are Almost There

Jira

At Oomph, we use Jira for task and project management. AI integrations in Jira are rolling out fast, and while some features aren’t fully baked yet, the potential is huge. For example, the “Set to Recur” automation currently only goes two weeks out (most of my recurring tickets are monthly), but I can see where this is headed.

Jira Automation is already a PM’s dream, though formatting rules still require HTML. I’d also love a more flexible timeline and built-in reporting, such as deeper metrics on time analysis, recurring task insights, or reports that identify duplicate efforts and estimate-accuracy trends. More intelligent AI-driven reporting is coming soon, and it promises to make project analysis even sharper. I’m looking forward to seeing how these enhancements unfold and impact my efficiency.

Zapier

Although I’m not a habitual Zapier user, I have explored it for cross-tool integrations at Oomph. Zapier’s interface is intuitive, and its library of integrations keeps expanding. Most of them work beautifully; the exception, in my case, was the Bamboo-to-Slack connection. Even with documentation and ChatGPT’s help, I needed to loop in a developer. Still, when it works, it really works.

Design Tools

As a Project Manager, I’m not regularly using design tools; I leave that to the experts. However, part of my role is to be a servant leader, which means ensuring that my team members are also elevating their skills and learning cutting-edge AI design tools. I had the opportunity to pilot a few of these design tools with a designer colleague, and it was quite eye-opening.

Tools like Lovable.ai and Adobe XD are poised to revolutionize design discovery and prototyping. Design tools can now spin up mood boards, style tiles, and even code-ready layouts in minutes. The caveat? AI outputs often look a bit homogenized and don’t always arrive at the best solutions for businesses’ and users’ design problems. That’s where agencies need to shine, adding the human creativity, nuance, and storytelling that make a brand truly stand out.

As a bonus, I now have a few design tools I can rely on whenever I need to create a quick visual for my team.

Tools That Still Have Some Catching Up to Do

Presentation Decks

I was really hopeful here. While AI can generate strong outlines and talking points, its design and layout capabilities still lag behind. So far, I’ve tried ChatGPT, Gemini, and Google Slides, all free versions, and none have hit the mark. A colleague recently recommended that I check out Gamma, which is now on my list to explore. But based on my personal experience, I’m guessing paid tools perform better. For now, this is still an area where a human touch makes all the difference. Plus, actually writing the content and building the deck myself helps me internalize the material and prepares me far better for the presentation. Relying too heavily on AI risks creating distance between me and the content, and that can show in the room.

Here’s where I’m getting my AI fix:

With AI news coming at us from every direction, I keep a short list of reliable newsletters that help me stay sharp without getting overwhelmed:

The Road Ahead

I know I’ve only scratched the surface. There are countless AI tools I haven’t discovered yet, and new ones launching daily. I’m trying to pace myself, focusing on tools that fit naturally into our existing tech stack and genuinely save time. Plus, subscription fatigue is real; not every shiny new app is worth the monthly fee.

Eventually, I expect we’ll see consolidation. With so many tools flooding the market, an “AI implosion” feels inevitable, where smaller platforms merge or get absorbed by bigger players. (Case in point: Google Calendar’s new Booking Pages feature gives Calendly a run for its money.)

As AI continues to evolve, so will the role of the Project Manager. Those who embrace experimentation and balance efficiency with human insight will be the ones leading the next wave of digital transformation. Stay curious and keep experimenting.

Selecting a content management system in healthcare is no longer a purely technical decision. In today’s environment, a CMS directly impacts compliance, accessibility, speed to publish, and ultimately, trust. Healthcare organizations are under growing pressure to deliver accurate, timely information across multiple digital channels, while meeting strict regulatory and accessibility requirements. The CMS at the center of that effort needs to support far more than page updates.

Why Healthcare CMS Decisions Are Uniquely Complex

Healthcare websites serve a wide range of audiences, from patients and caregivers to providers, partners, and regulators. Content must be clear, accurate, and easy to update—often by multiple teams—without introducing risk.

At the same time, healthcare organizations face constraints that many other industries don’t. Accessibility standards, privacy expectations, and governance requirements are non-negotiable.

A CMS that lacks flexibility or control quickly becomes a bottleneck.

“The healthcare content management system market is projected to grow to over $61 billion by 2031, underscoring how healthcare organizations are prioritizing modern, scalable digital platforms to support compliance, multi-channel delivery, and governance.”

According to Mordor Intelligence

What Healthcare Teams Should Prioritize

Flexibility Without Compromising Security

Healthcare organizations often rely on complex digital ecosystems, including EHRs, portals, analytics tools, and consent platforms. A modern CMS should integrate cleanly with these systems rather than trying to replace them.

Flexibility matters, but not at the expense of security. The right CMS supports modular integration while keeping sensitive data protected and clearly separated from content operations.

Planning For Change, Not Just Launch

CMS selection shouldn’t be based solely on current needs. Healthcare regulations, digital expectations, and technologies continue to evolve. The most effective platforms are designed to adapt without requiring frequent replatforming.

This means supporting incremental improvements, phased rollouts, and long-term scalability—so teams can modernize at a pace that aligns with organizational priorities.

The Role Of Modern, Composable CMS Platforms

Composable CMS platforms are gaining traction in healthcare because they treat content as structured data rather than static pages. This approach supports reuse, consistency, and omnichannel delivery while maintaining governance.

For healthcare teams, this translates into faster publishing, fewer bottlenecks, and greater confidence in content accuracy without sacrificing compliance.

What This Means For Healthcare Teams

Healthcare CMS selection is about more than choosing a tool. It’s about enabling teams to communicate clearly, operate efficiently, and adapt responsibly in a complex digital landscape.

Organizations that prioritize governance, accessibility, and flexibility position themselves to deliver trusted digital experiences today and in the years ahead.

Ready to Evaluate Your Healthcare CMS? Our team helps healthcare organizations navigate complex CMS decisions with a focus on governance, accessibility, and long-term scalability. Let’s talk about what the right platform looks like for your organization.

Contentful is no longer just an alternative CMS—it’s become a foundational platform for organizations navigating complexity, regulation, and rapid digital change. In 2026, the question isn’t “What is Contentful?” It’s “Why are so many organizations rebuilding their digital ecosystems around it?” The answer lies in how digital experiences are built, managed, and scaled today.

Contentful Is Built for Systems, Not Pages

Traditional CMS platforms were designed around pages and templates. That model breaks down when content needs to move faster, live in more places, and remain consistent across teams and channels.

Contentful takes a different approach. It treats content as structured data, not static pages. That means teams create content once and deliver it anywhere—websites, apps, portals, email, or future channels that don’t yet exist.

In 2026, this isn’t a “nice to have.” It’s how modern digital platforms operate.
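The “structured data, not pages” idea can be shown with a small sketch: one content entry, defined once, rendered for two different channels. The entry fields and render functions below are hypothetical illustrations, not Contentful’s actual API or schema.

```python
# Illustrative sketch: one structured content entry rendered to two
# channels. Field names are hypothetical, not a real Contentful schema.
entry = {
    "title": "Flu Shot Clinic Hours",
    "summary": "Walk-ins welcome weekdays 9-5.",
    "cta_url": "https://example.com/flu-clinic",
}

def render_web(e):
    """Render the entry as an HTML fragment for the website."""
    return (f"<article><h2>{e['title']}</h2>"
            f"<p>{e['summary']}</p>"
            f'<a href="{e["cta_url"]}">Learn more</a></article>')

def render_email(e):
    """Render the same entry as plain text for an email campaign."""
    return f"{e['title']}\n{e['summary']}\nMore info: {e['cta_url']}"

web_html = render_web(entry)
email_text = render_email(entry)
```

Because the entry itself carries no markup, updating the summary in one place updates every channel that renders it, which is the core payoff of structured content.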

Composable Architecture Is Now the Default

Composable architecture has moved from trend to standard. Organizations want the freedom to choose best-in-class tools without being locked into monolithic platforms.

Contentful fits cleanly into this model. It integrates with design systems, analytics platforms, personalization tools, consent managers, and AI services through APIs—without forcing teams into rigid workflows.

This flexibility allows organizations to evolve their stack over time instead of rebuilding every few years.

AI Depends on Structured Content

AI-driven experiences are only as good as the content behind them. In 2026, organizations are using AI to support personalization, search, localization, content optimization, and automation.

Contentful’s structured content model makes this possible. Clean, well-defined content enables AI tools to understand, reuse, and adapt content accurately—without introducing risk or inconsistency.

For teams exploring AI responsibly, Contentful provides the infrastructure needed to scale with confidence.

Governance and Compliance Are Built In, Not Bolted On

For regulated and mission-driven organizations, governance isn’t optional. Publishing controls, audit trails, permissions, and review workflows are essential.

Contentful supports these needs at scale. Teams can define roles, control who edits or publishes content, and maintain visibility into changes across environments. This level of governance is critical in industries like healthcare, legal, finance, and the public sector.

In 2026, compliance isn’t something teams add later—it’s designed into the platform from day one.

Marketing and Development Work Better Together

One of Contentful’s biggest advantages is how it aligns marketing and engineering teams. Developers maintain design systems and integrations. Content teams manage content without breaking layouts or workflows.

This separation of concerns reduces friction, speeds up delivery, and minimizes production errors—especially as digital ecosystems grow more complex.

Why Organizations Move to Contentful Now

Organizations typically migrate to Contentful when legacy systems start holding them back. Common triggers include:

In 2026, Contentful isn’t chosen because it’s new. It’s chosen because it’s resilient.

For organizations new to the platform, getting started doesn’t have to mean a complete rebuild. Oomph’s Contentful Kickstart Package helps teams move from decision to deployment with a structured, low-risk approach—giving you the foundation to scale as your needs evolve.

The Takeaway

Contentful has evolved alongside the modern digital landscape. It’s not just a CMS—it’s a content platform designed for scale, governance, and change.

For organizations planning beyond their next website launch and toward long-term digital maturity, Contentful provides the flexibility and confidence needed to move forward.

Ready to explore what Contentful could do for your organization? Whether you’re evaluating platforms, planning a migration, or looking to optimize your current setup, Oomph can help you build a content infrastructure designed for the long term. Let’s talk about your next move.

Cookie consent has become a standard part of the modern web experience. What once felt like a small technical detail is now central to how organizations handle privacy, compliance, and user trust online.

Why It Matters:

If your website uses analytics, marketing tools, or third-party integrations, cookie consent isn’t optional. It’s a foundational requirement of operating a responsible digital presence.

Cookie consent refers to the practice of informing website visitors about how cookies are used and giving them control over that use. Rather than assuming permission, organizations are expected to be transparent about what data is collected, why it’s collected, and how it’s used. Visitors must be able to opt in, opt out, or manage their preferences in a clear and accessible way.

The rise of cookie consent is directly tied to privacy regulations like GDPR and CCPA. These laws were designed to shift control back to individuals, especially as data collection has become more sophisticated. Cookies, particularly those used for tracking behavior, measuring performance, or enabling personalization, fall squarely within that scope.

Not all cookies serve the same purpose. Some are essential for basic site functionality, while others support analytics, advertising, or embedded third-party services. Modern consent approaches recognize this difference, allowing users to make informed decisions rather than forcing an all-or-nothing choice.
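As a rough sketch of how category-based gating works in practice, the snippet below loads essential scripts unconditionally and everything else only on explicit opt-in. The categories and script names are illustrative assumptions, not tied to any particular consent platform.

```python
# Sketch of category-based consent gating: essential scripts always
# load; everything else requires explicit opt-in. Names are illustrative.
SCRIPT_CATEGORIES = {
    "session-manager.js": "essential",
    "analytics.js": "analytics",
    "ad-pixel.js": "advertising",
}

def scripts_to_load(consent):
    """Return the scripts allowed under the visitor's consent choices.

    `consent` maps a category name to True/False; "essential" is
    always allowed, and unmentioned categories default to denied.
    """
    allowed = []
    for script, category in SCRIPT_CATEGORIES.items():
        if category == "essential" or consent.get(category, False):
            allowed.append(script)
    return allowed

# A visitor who opts in to analytics but declines advertising:
loaded = scripts_to_load({"analytics": True, "advertising": False})
```

Defaulting unmentioned categories to denied mirrors the opt-in posture regulations like GDPR expect, rather than an all-or-nothing banner.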

How cookie consent is implemented matters just as much as having it in place. Poorly designed consent experiences (confusing banners, vague language, or limited options) can frustrate users and undermine trust. Thoughtful implementations do the opposite. They integrate naturally into the site experience, communicate clearly, and respect user choice without disrupting usability.

Your content management system plays an important role here. Cookie consent doesn’t exist in isolation; it must work alongside your CMS, analytics tools, and marketing stack. A modern CMS makes it easier to manage scripts, control how and when tracking tools load, and update privacy messaging as regulations evolve. Without that flexibility, maintaining compliance becomes difficult and error-prone.

The risks of getting cookie consent wrong extend beyond potential fines. In an environment where users are increasingly aware of how their data is handled, missteps can damage credibility. For organizations in regulated or mission-driven sectors, that erosion of trust can have real consequences.

Cookie consent is no longer a checkbox or a banner added at the end of a build. It’s a core component of modern digital governance.

Organizations that treat consent as part of their broader content and platform strategy are better equipped to stay compliant, adapt to change, and deliver digital experiences users can trust.

Need help implementing cookie consent the right way? Whether you’re navigating GDPR requirements, evaluating your current setup, or planning a website redesign, Oomph can help you build a privacy-forward digital experience that protects users and keeps you compliant. Get in touch to talk through your needs.

For many organizations, privacy regulations like GDPR and CCPA seem like distant legal concerns rather than operational priorities. In practice, however, websites serve as the primary point of data collection—making compliance far more relevant than most teams assume. If your site collects user data in any form, privacy compliance isn’t optional.

Understanding When GDPR and CCPA Apply

GDPR governs the collection of personal data from users in the European Union, while CCPA applies to personal data collected from California residents.

Crucially, these regulations are triggered by user location, not company headquarters. A U.S.-based organization serving a global audience may be subject to both frameworks.

Why Websites Are at the Center of Compliance

Most modern websites collect data through multiple channels:

Each of these collection points creates compliance obligations around consent, transparency, and user control.

Moving Beyond Cookie Banners

Meaningful compliance extends well beyond footer disclaimers. Effective privacy management requires:

Legacy CMS platforms frequently lack the flexibility and governance capabilities needed to meet these requirements.

The Role of Your CMS in Privacy Compliance

Your content management system is instrumental in supporting privacy obligations. A modern, composable CMS enables organizations to:

For regulated and mission-driven organizations, CMS limitations can translate directly into compliance vulnerabilities.

The Cost of Non-Compliance

While regulatory penalties are a concern, the greater risk lies in eroding user trust.

Today’s users expect transparency and control over their personal information. Organizations unable to deliver on these expectations risk damaging their reputation with customers, donors, and partners.

Final Thoughts

GDPR and CCPA represent more than legal obligations—they present fundamental digital experience challenges. Websites built on flexible, compliance-ready platforms are better positioned to adapt as privacy expectations continue to evolve.

In today’s environment, privacy compliance shouldn’t be viewed as a constraint. It’s an essential component of delivering a modern, trustworthy digital experience.

Need help ensuring your website meets modern privacy standards? Our team specializes in building compliance-ready digital platforms that protect your users and your organization. Let’s discuss your requirements.

A website is the cornerstone of your brand’s digital presence. It communicates who you are, what you offer, and why customers should trust you. In today’s digital-first marketplace, your website is often the first, and sometimes only, impression a prospective customer will have of your business. That makes maintaining it not just a technical task, but a strategic business priority.

Owning a website is a long-term investment. It reflects your brand, reputation, values, and offerings, and it directly influences key performance indicators (KPIs) such as lead generation, conversions, and customer engagement. Consider how much time and budget go into designing and building a website. Once the site goes live, the work doesn’t end there; ongoing maintenance is critical to ensure it continues to run optimally.

The risks of neglecting website maintenance are extensive. Common issues include:

Security vulnerabilities
Website downtime or broken functionality
Slow performance
SEO ranking loss
Incompatibility with new browsers and devices
Poor analytics and marketing integration
Higher long-term costs
Damage to brand reputation and trust

The ROI of Regular Website Maintenance

Proper maintenance is a business-critical investment with measurable ROI. Regular updates and monitoring strengthen security, preserve performance, ensure compliance with accessibility standards, and protect user experience. With a clear process in place, maintenance safeguards your digital presence, reduces costs, and supports outcomes such as improved lead generation, e-commerce revenue, and stronger brand trust.

Here’s a breakdown of the ROI across multiple dimensions:

Why Start Early

It’s important to begin discussing website maintenance with your agency during the planning stages of a new site, as it can influence technical decisions and long-term requirements. Maintenance packages vary depending on your team’s resources, and a trusted agency partner can help define core tasks, expectations, and responsibilities. For organizations with tighter budgets, we’ve also seen success with automated solutions that handle routine updates.

Maintenance as Business Insurance

Website maintenance is more than applying updates—it’s business insurance.

Organizations need to uphold security, performance, accessibility, SEO, GDPR compliance, and other standards that directly affect user experience and, in many cases, legal obligations. Working with Oomph ensures these processes are streamlined, proactive, and aligned with your business goals. If you’re looking to protect your digital investment, let’s explore a maintenance approach tailored to your team’s needs. Learn more about our maintenance services.

When you’re responsible for your organization’s digital presence, it’s natural to focus on what’s visible: the design, the content, the user experience. But beneath every modern website lies a complex ecosystem of technologies, integrations, and workflows that can either accelerate your team’s success or create hidden friction that slows everything down.

That’s where a technical audit becomes invaluable. It’s not just a diagnostic tool—it’s a strategic opportunity to understand the foundation of your platform and make informed decisions about your digital future.

It’s Like a Home Inspection for Your Website

Think about buying a house. You walk through focusing on the big picture—does the kitchen work for your family? Is there enough space? But a good home inspector looks deeper, checking the foundation, examining the electrical system, and spotting that small leak under the bathroom sink that could become a major problem later.

A technical audit takes the same comprehensive approach to your digital platform. We examine not just what’s working today, but what might impact your team’s ability to execute tomorrow. The goal isn’t to find problems for the sake of finding them—it’s to give you the complete picture you need to plan strategically.

Creating Shared Understanding Across Your Entire Team

One of the most powerful outcomes of a technical audit is alignment. Whether you’re managing internal developers, partnering with an agency, or preparing to issue an RFP, having a clear baseline allows everyone to ask better questions and make more accurate decisions.

A strategic technical audit delivers:

Proactive Problem-Solving: Surface technical issues before they become roadblocks to important campaigns or launches.

Performance Optimization: Identify specific improvements that will measurably enhance user experience and conversion rates.

Workflow Enhancement: Reveal friction points that slow down content updates, campaign launches, or day-to-day management tasks.

Vendor Enablement: Provide partners and potential vendors with the context they need to scope work accurately and ask intelligent questions.

Strategic Planning: Create a foundation for long-term digital strategy decisions, from infrastructure investments to editorial tooling.

The organizations we work with often tell us that a technical audit helped them transition from reactive maintenance to proactive digital platform management—a shift that pays dividends across every initiative.

What We Typically Discover

While every platform is unique, certain patterns emerge across industries and organization types. Technical audits frequently reveal:

Security and Maintenance Opportunities: Outdated software, plugins requiring updates, or access configurations that can be strengthened with minimal effort. This often includes ensuring accessibility compliance meets current standards.

Performance Enhancements: Specific optimizations in areas like image compression, caching strategies, or database queries that directly impact user experience. Modern audits also examine search visibility and performance optimization.

Scalability Considerations: Code or architectural decisions that work fine today but could limit growth or flexibility as your needs evolve. This includes evaluating search infrastructure and international expansion capabilities.

Process Improvements: Gaps in version control, deployment workflows, or change management that create unnecessary risk or slow down development cycles.

Editorial Workflow Optimization: Content management processes that feel cumbersome or inconsistent, often because they evolved organically rather than being designed strategically. For global organizations, this includes reviewing translation and localization systems.

Many of these findings aren’t urgent fixes—they’re strategic insights that become incredibly valuable when you’re planning a redesign, launching a major campaign, or evaluating new partnerships.

When a Technical Audit Delivers Maximum Value

You don’t need to wait for problems to emerge. Technical audits are particularly valuable when:

Taking Over Digital Responsibility: You’ve inherited a platform and need a comprehensive understanding of what you’re working with and where the opportunities lie.

Planning Major Initiatives: Before investing in a redesign, platform migration, or significant feature development, understanding your current foundation prevents costly surprises.

Preparing for Vendor Selection: Whether you’re issuing an RFP or evaluating agencies, giving potential partners accurate technical context leads to better proposals and more realistic timelines.

Developing Digital Strategy: When you’re ready to create a roadmap for digital growth, grounding decisions in technical reality rather than assumptions leads to better outcomes. This is especially important when considering AI integration or generative engine optimization strategies.

Our Approach to Technical Audits

We design our audits to build clarity and confidence, not overwhelm you with technical jargon. Rather than simply delivering a report, we walk through findings with your team, prioritize recommendations based on your specific goals, and translate technical insights into actionable business language you can share with stakeholders.

Our methodology goes beyond code analysis. We examine how your platform supports your current workflows, aligns with your organizational objectives, and positions you for future growth. This combination of technical depth and strategic perspective ensures you get insights that drive real business outcomes.

The audit process focuses on partnership, not judgment.

We’re not looking for flaws to criticize—we’re identifying opportunities to help you and your partners make smarter decisions. The result is visibility into the hidden layers of your digital platform and a foundation for more strategic planning, better technology investments, and sustainable long-term success.

Ready to understand what’s really happening under the hood of your digital platform? Let’s talk about how a technical audit could support your goals and strengthen your team’s ability to execute on your digital vision.

If your Drupal site relies on Acquia Search, Acquia’s Solr-based offering, you’re likely facing a migration to SearchStax. We’ve guided numerous organizations through this transition and want to share our proven approach to help you navigate this change successfully.

Before diving into the migration process, this transition presents an excellent opportunity to reassess your search strategy entirely. While Solr remains a powerful and robust solution, the search landscape has evolved significantly with innovative alternatives now available. For organizations considering broader platform transitions, this moment offers strategic value beyond search improvements. Modern React-based solutions can deliver dramatically faster user experiences. Our recent work with ONS demonstrates this potential—by replacing their Solr solution with Algolia Instant Search, we helped them achieve a 40% improvement in search response times while creating a more intuitive experience for their members.

Why the Move to SearchStax?

Acquia announced earlier this year that they’re sunsetting their Acquia Search offering in 2026, positioning SearchStax as the recommended migration path through their new partnership. This transition offers enhanced search capabilities and more direct control over your search environment through SearchStax’s comprehensive dashboard, providing visibility into Solr server performance, data analysis tools, search preview functionality, and advanced configuration options.

The architectural similarity ensures a seamless end-user experience—Solr remains the foundation, requiring no front-end changes for this migration path while delivering improved administrative control.
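To illustrate why the front end is unaffected: a standard Solr select query looks the same regardless of where the index is hosted; only the server endpoint changes. The hostnames and core name below are hypothetical placeholders, not actual Acquia or SearchStax endpoints.

```python
# Sketch: a Solr /select query is identical across providers; only the
# host changes. Hostnames and core name are hypothetical placeholders.
from urllib.parse import urlencode

def solr_select_url(host, core, query, rows=10):
    """Build a standard Solr /select URL for a keyword search."""
    params = urlencode({"q": query, "rows": rows, "wt": "json"})
    return f"https://{host}/solr/{core}/select?{params}"

old = solr_select_url("acquia-search.example.com", "site_index", "flu shot")
new = solr_select_url("searchstax.example.com", "site_index", "flu shot")
# Everything after the host (core, query parameters) is unchanged.
```

Because the query contract stays constant, the migration work concentrates on server configuration and index rebuilding rather than front-end search code.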

Our Proven Migration Framework

Through multiple successful migrations, we’ve developed a structured approach that minimizes risk and ensures smooth transitions. Here’s our step-by-step framework:

Phase 1: Foundation Setup

Phase 2: Testing and Validation

Phase 3: Production Implementation

Phase 4: Configuration Management

Phase 5: Transition Management

Addressing Technical Challenges

Our experience across multiple migrations has revealed common technical hurdles that require proactive attention. Configuration issues with Boost by Date Processor settings, Highlighted Fields errors during index rebuilding, and Facet configuration mismatches between environments are frequent challenges. The key to success lies in early identification during lower environment testing and leveraging Acquia support resources to resolve issues before they impact production.

Each migration presents unique challenges based on your specific configuration and content structure. Our approach prioritizes thorough testing and validation to surface these issues early, ensuring smooth production deployment.
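One practical way to surface configuration mismatches like the facet discrepancies described above is to diff the exported search configuration between environments before cutover. The sketch below is a generic illustration of that idea; the keys and structure are invented for the example and are not the actual Drupal or Solr export format:

```python
def diff_config(a: dict, b: dict, prefix: str = "") -> list[str]:
    """Report keys that differ between two environments, or exist in only one."""
    issues = []
    for key in sorted(set(a) | set(b)):
        path = f"{prefix}{key}"
        if key not in a:
            issues.append(f"only in env B: {path}")
        elif key not in b:
            issues.append(f"only in env A: {path}")
        elif isinstance(a[key], dict) and isinstance(b[key], dict):
            # Recurse into nested configuration sections.
            issues.extend(diff_config(a[key], b[key], f"{path}."))
        elif a[key] != b[key]:
            issues.append(f"differs: {path} ({a[key]!r} vs {b[key]!r})")
    return issues

# Illustrative exports from a staging and production environment.
staging = {"facets": {"category": {"limit": 10}}, "boost_by_date": True}
production = {"facets": {"category": {"limit": 20}}, "boost_by_date": True}
for issue in diff_config(staging, production):
    print(issue)
```

Running a check like this against lower environments first mirrors the early-identification approach described above: drift is caught as a reviewable list rather than as a production surprise.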

Strategic Search Optimization

Successful migration extends beyond technical implementation. Understanding your content architecture, user behavior patterns, and business objectives enables you to optimize search effectiveness during the transition. This migration provides an ideal opportunity to evaluate search performance metrics, refine content indexing strategies, and enhance user experience design.

By following this proven framework and preparing for potential challenges, your organization can successfully transition to SearchStax while improving both administrative capabilities and user search experience. The result is a more robust, manageable search solution that positions your site for future growth and enhanced user engagement.

Our comprehensive migration expertise extends beyond search implementations to complete platform transformations, ensuring your digital infrastructure supports your long-term strategic objectives.

Ready to begin your SearchStax migration? Don’t wait until the 2026 deadline creates a migration rush. Our fixed-price SearchStax migration service ($2,500) provides the structured, proven approach outlined in this guide—from foundation setup through transition management. Get started with your SearchStax migration today.

The tech industry has never been accused of moving slowly. The exponential explosion of AI tools in 2024, though, set a new standard for fast-moving. The final months of 2024 alone brought more change than the several years before them. If you have not been actively paying attention to AI, now is the time to start.

I have been intently watching the AI space for over a year. I started from a place of great skepticism, not willing to internalize the hype until I could see real results. I can now say with confidence that when applied to the correct problem with the right expectations, AI can make significant advancements possible no matter the industry.

In 2024, not only did large language models become more powerful and extensible, but new tools were built on them to solve real business problems. Because of this, skepticism about AI has shifted to cautious optimism. Spurred by the Fortune 500’s investments and early impacts, companies of every shape and size are starting to harness the power of AI for efficiency and productivity gains.

Let’s review what happened in Quarter Four of 2024 as a microcosm of the year in AI.

New Foundational Models in the AI Space

A foundational large language model (LLM) is one from which other AI tools can be built. The major foundational LLMs have been ChatGPT, Claude, Llama, and Gemini, operated by OpenAI (with Microsoft’s backing), Anthropic, Meta, and Google respectively.

In 2024, additional key players entered the space to create their own foundational models. 

Amazon

Amazon has been pumping investments into Anthropic, as its operations are huge consumers of AI for driving efficiency. With its own internal foundational LLM, Amazon can remove the need to share operational data with an external party. Further, as it did with its AWS business, it can monetize its own AI services built on its own models. Amazon Nova launched in early December.

xAI

In May of 2024, xAI secured funding to create and train its own foundational models. Founder Elon Musk was a co-founder of OpenAI. The company announced in June that it would build the world’s largest supercomputer, and it was operational by December.

Nvidia

In October, AI chip-maker Nvidia announced its own LLM family, Nemotron, to compete directly with OpenAI and Google — organizations that rely on its chips to train and power their own LLMs.

Rumors of more to come

Apple Intelligence launched slowly in 2024 and uses OpenAI’s models. Industry insiders think it is natural to expect Apple to create its own LLM and position it as a privacy-first, on-device service. 

Foundational Model Advancements

While some companies are starting to create their own models, the major players have released advanced tools that can use a range of inputs to create a multitude of outputs: 

Multimodal Processing

AI models can now process and understand multiple types of data together, such as images, text, and audio. This allows for more complex interactions with AI tools. 

Google’s NotebookLM was a big hit this year for its ability to use a range of data as sources, from Google Docs to PDFs to web links for text, audio, and video. The tool essentially allows the creation of small, custom RAG databases to query and chat with.
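The “query and chat with your own documents” pattern behind NotebookLM is, at its core, retrieval-augmented generation (RAG): index source documents, retrieve the passages most similar to a question, and hand those passages to a model as grounding context. A minimal sketch of the retrieval half, using toy bag-of-words cosine similarity in place of a real embedding model (the corpus, query, and function names are illustrative):

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Bag-of-words term counts stand in for a learned embedding.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank source passages by similarity to the query; a full RAG system
    # would then pass the top-k passages to an LLM as context for its answer.
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "Solr powers the site search for the Drupal platform.",
    "The quarterly budget review happens every January.",
    "Meeting notes are stored in the shared drive.",
]
print(retrieve("who powers site search", docs))
```

The appeal of the pattern is that answers are grounded in a small, curated set of sources rather than the model’s entire training corpus.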

Advanced Reasoning

OpenAI’s o1 reasoning model (pronounced “Oh One”) uses step-by-step “Chain of Thought” processing to solve complex problems, including math, coding, and scientific tasks. This has led to AI tools that can draw conclusions, make inferences, and form judgments based on information, logic, and experience. Queries take longer but the answers are more accurate and provide more depth.

Google’s Deep Research is a similar product that was released to Gemini users in December.

Enhanced Voice Interaction

More and more AI tools can engage in natural and context-aware voice interactions — think Siri, but way more useful. This includes handling complex queries, understanding different tones and styles, and even mimicking personalities such as Santa Claus.

Vision Capabilities

AI can now “see” and interpret the world through cameras and visual data. This includes the ability to analyze images, identify objects, and understand visual information in real-time. Examples include Meta’s DINOv2, OpenAI’s GPT-4o, and Google’s PaliGemma.

AI can also interact with screen displays on devices, allowing for a new level of awareness of sensory input. OpenAI’s desktop app for Mac and Windows is contextually aware of what apps are available and in focus. Microsoft’s Copilot Vision integrates with the Edge browser to analyze web pages as users browse. Google’s Project Mariner prototype allows Gemini to understand screen context and interact with applications.

While still early and fraught with security and privacy implications, the technology will lead to more advancements for “Agentic AI,” which will continue to grow in 2025.

Agentic Capabilities

AI models are moving towards the ability to take actions on behalf of users. No longer confined to chat interfaces alone, these new “Agents” will perform tasks autonomously once trained and set in motion.

Note: Enterprise leader Salesforce launched Agentforce in September 2024. Despite the name, these are not autonomous agents in the same sense. Custom agents must be trained by humans and given instructions, parameters, prompts, and success criteria. Right now, these agents are more like interns that need management and feedback.

Specialization

2024 also saw an increase in models designed for specific domains and tasks. With reinforcement fine-tuning, companies are creating tools for legal, healthcare, finance, stocks, and sports. 

Examples include Sierra, which offers a specifically trained customer service platform, and LinkedIn’s hiring-assistant agents.

What this all means for 2025

It’s clear that AI models and tools will continue to advance, and businesses that embrace AI will be in a better position to thrive. To be successful, businesses need an experimental mindset of continuous learning and adaptation: 

While the models will continue to get better into 2025, don’t wait to explore AI. Even if the existing models never improve, they are powerful enough to drive significant gains in business. Now is the time to implement AI in your business. Choose a model that makes sense and is low-friction — if you are an organization that uses Microsoft products, start with a trial of AI add-ons for office tools, for example. Start accumulating experience with the tools at hand, and then expand to include multiple models to evaluate more complex AI options that may have greater business impact. It almost doesn’t matter which you choose, as long as you get started.

Oomph has started to experiment with AI ourselves and Drupal has exciting announcements about integrating AI tools into the authoring experience. If you would like more information, please reach out for a chat.

The U.S. is one of the most linguistically diverse countries in the world. While English may be our official language, the number of people who speak a language other than English at home has actually tripled over the past three decades.

Statistically speaking, the people you serve are probably among them. 

You might even know they are. Maybe you’ve noticed an uptick in inquiries from non-English speaking people or tracked demographic changes in your analytics. Either way, chances are good that organizations of all kinds will see more, not less, need for translation — especially those in highly regulated and far-reaching industries, like higher education and healthcare.

So, what do you do when translation becomes a top priority for your organization? Here, we explain how to get started.

3 Solutions for Translating Your Website

Many organizations have an a-ha moment when it comes to translations. For our client Lifespan, that moment came during its rebrand to Brown University Health and a growing audience of non-English speaking people. For another client, Visit California, that moment came when developing their marketing strategies for key global audiences.

Or maybe you’re more like Leica Geosystems, a longtime Oomph client that prioritized translation from the start but needed the right technology to support it. 

Whenever the time comes, you have three main options: 

Manual translation and publishing

When most people think of translating, manual translation comes to mind. In this scenario, someone on your team or someone you hire translates content by hand and uploads the translation as a separate page to the content management system (CMS).

Translating manually will offer you higher quality and more direct control over the content. You’ll also be able to optimize translations for SEO; manual translation is one of the best ways to ensure the right pages are indexed and findable in every language you offer them. Manual translation also carries fewer ongoing technical fees and less long-term maintenance, especially if you use a CMS like Drupal, which supports translations by default.

“Drupal comes multi-lingual out of the box, so it’s very easy for editors to publish translations of their site and metadata,” Oomph Senior UX Engineer Kyle Davis says. “Other platforms aren’t going to be as good at that.” 

While manual translation may sound like a winning formula, it can also come at a high cost, pushing it out of reach for smaller organizations or those who can’t allocate a large portion of their budget to translate their website and other materials. 

Integration with a real-time API

Ever seen a website with clickable international flags near the top of the page? That’s a translation API. These machine translation tools can translate content in the blink of an eye, helping users of many different languages access your site in their chosen language. 

“This is different than manual translation, because you aren’t optimizing your content in any way,” Oomph Senior UX Engineer John Cionci says. “You’re simply putting a widget on your page.” 

Despite their plug-and-play reputation, machine translation APIs can actually be fairly curated. Customization and localization options allow you to override certain phrases to make your translations appropriate for a native speaker. This functionality would serve you well if, like Visit California, you have a team to ensure the translation is just right. 

Though APIs are efficient, they also do not take SEO or user experience into account. You’re getting a direct real-time translation of your content, nothing more and nothing less. This might be enough if all you need is a default version of a page in a language other than English; by translating that page, you’re already making it more accessible. 

However, this won’t always cut it if your goal is to create more immersive, branded experiences — experiences your non-English-speaking audience deserves. Some translation API solutions also aren’t as easy to install and configure as they used to be. While the overall cost may be less than manual translation, you’ll also have an upfront development investment and ongoing maintenance to consider. 

Use Case: Visit California

Manual translation doesn’t have to be all or nothing. Visit California has international marketing teams in key markets skilled in their target audiences’ primary languages, enabling them to blend manual and machine translation. 

We worked with Visit California to implement machine translation (think Google Translate) to do the heavy lifting. After a translation is complete, their team comes in to verify that all translated content is accurate and represents their brand. Leveraging the glossary overrides feature of Google Cloud Translate V3, they can tailor the translations to their communication objectives for each region. In addition, their Drupal CMS still allows them to publish manual translations when needed. This hybrid approach has proven to be very effective.
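The glossary-override pattern described above can be illustrated with a small post-processing sketch: a curated term map swaps generic machine renderings for brand-approved phrasing. Note this is a simplified, self-contained stand-in for the concept only; Google Cloud Translate V3 applies its glossary during translation rather than after, and the glossary entries below are invented:

```python
import re

# Hypothetical brand glossary: machine-translated phrase -> approved term.
GLOSSARY = {
    "Golden State": "California",
    "travel guide": "official visitor guide",
}

def apply_glossary(translated: str, glossary: dict[str, str]) -> str:
    # Replace exact phrase matches with the brand-approved wording.
    for source, approved in glossary.items():
        translated = re.sub(re.escape(source), approved, translated)
    return translated

machine_output = "Download the travel guide to plan your Golden State trip."
print(apply_glossary(machine_output, GLOSSARY))
# -> "Download the official visitor guide to plan your California trip."
```

The hybrid workflow follows the same shape: machine translation does the heavy lifting, and a curated layer of overrides and human review keeps the result on-brand for each region.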

Third-party translation services

The adage “You get what you pay for” rings true for translation services. While third-party translation services cost more than APIs, they also come with higher quality — an investment that can be well worth it for organizations with large non-English-speaking audiences.

Most translation services will provide you with custom code, cutting down on implementation time. While you’ll have little to no technical debt, you will have to keep on top of recurring subscription fees.

What does that get you? If you use a proxy-based solution like MotionPoint, you can expect to have content pulled from your live site, then freshly translated and populated on a unique domain. 

“Because you can serve up content in different languages with unique domains, you get multilingual results indexed on Google and can be discovered,” Oomph Senior Digital Project Manager Julie Elman says. 

Solutions like Ray Enterprise Translation, on the other hand, combine an API with human translation, making it easier to manage, override, moderate, and store translations all within your CMS. 

Use Case: Leica Geosystems

Leica’s Drupal e-commerce store is active in multiple countries and languages, making it difficult to manage ever-changing products, content, and prices. Oomph helped Leica migrate to a single-site model during their migration from Drupal 7 to 8 back in 2019. 

“Oomph has been integral in providing a translation solution that can accommodate content generation in all languages available on our website,” says Jeannie Records Boyle, Leica’s e-Commerce Translation Manager. 

This meant all content had one place to live and could be translated into all supported languages using the Ray Enterprise Translation integration (formerly Lingotek). Authors could then choose which countries the content should be available in, making it easier to author engaging and accurate content that resonates around the world.  

“Whether we spin up a new blog or product page in English or Japanese, for example, we can then translate it to the many other languages we offer, including German, Spanish, Norwegian Bokmål, Dutch, Brazil Portuguese, Italian, and French,” Records Boyle says.

Taking a Strategic Approach to Translation

Translation can be as simple as the click of a button. However, effective translation that supports your business goals is more complex. It requires that you understand who your target audiences are, the languages they speak, and how to structure that content in relation to the English content you already have. 

The other truth about translation is that there is no one-size-fits-all option. The “right” solution depends on your budget, in-house skills, CMS, and myriad other factors — all of which can be tricky to weigh. 

Here at Oomph, we’ve helped many clients make their way through website translation projects big and small. We’re all about facilitating translations that work for your organization, your content admins, and your audience — because we believe in making the Web as accessible as possible for all. 

Want to see a few recent examples or dive deeper into your own website translation project? Let’s talk.

Oomph has been quiet about our excitement for artificial intelligence (A.I.). While the tech world has exploded with new A.I. products, offerings, and add-ons to existing product suites, we have been formulating an approach to recommend A.I.-related services to our clients. 

One of the biggest reasons we have been quiet is the complexity and fast pace of change in the landscape. Giant companies have tried A.I. with some loud public failures. The investment and venture capital community is hyped on A.I. but has recently become cautious as the promised productivity and profit gains have not materialized. It is a familiar boom-then-bust cycle of attention that we have seen before — most recently with AR/VR after the Apple Vision Pro five months ago, and previously with the Metaverse, Blockchain/NFTs, and Bitcoin.

There are many reasons to be optimistic about applications for A.I. in business. And there continue to be many reasons to be cautious as well. Just like any digital tool, A.I. has pros and cons and Oomph has carefully evaluated each. We are sharing our internal thoughts in the hopes that your business can use the same criteria when considering a potential investment in A.I. 

Using A.I.: Not If, but How

Most digital tools now have some kind of A.I. or machine-learning built into them. A.I. has become ubiquitous and embedded in many systems we use every day. Given investor hype for companies that are leveraging A.I., more and more tools are likely to incorporate A.I.

This is not a new phenomenon. Grammarly has been around since 2015, and by many measures it is an A.I. tool — it is trained on human-written language to provide contextual corrections and suggestions for improvements.

Recently, though, embedded A.I. has exploded across markets. Many of the tools Oomph team members use every day have A.I. embedded in them, across sales, design, engineering, and project management — from Google Suite and Zoom to Github and Figma.

The market has already decided that business customers want access to time-saving A.I. tools. Some welcome these options, and others will use them reluctantly.

Either way, the question has very quickly moved from should our business use A.I. to how can our business use A.I. tools responsibly?

The Risks that A.I. Pose

Every technological breakthrough comes with risks. Some pundits (both for and against A.I. advancements) have likened its emergence to the Industrial Revolution. A similarly high level of positive impact is possible, while the cultural, societal, and environmental repercussions could also follow a similar trajectory.

A.I. has its downsides. When evaluating A.I. tools as a solution to our clients’ problems, we keep this list of drawbacks and negative effects handy so that we can think about how to mitigate them:

We have also found that our company values are a lens through which we can evaluate new technology and any proposed solutions. Oomph has three cultural values that form the center of our approach and our mission, and we add our stated 1% For the Planet commitment to that list as well: 

For each of A.I.’s drawbacks, we use the lens of our cultural values to guide our approach to evaluating and mitigating those potential ill effects. 

A.I. is built upon biased and flawed data

At its core, A.I. is built upon terabytes of data and billions, if not trillions, of individual pieces of content. Training data for Large Language Models (LLMs) like ChatGPT, Llama, and Claude encompasses mostly public content as well as special subscriptions through relationships with data providers like the New York Times and Reddit. Image generation tools like Midjourney and Adobe Firefly require billions of images to train them and have skirted similar copyright issues while gobbling up as much free public data as they can find.

Because LLMs require such a massive amount of data, it is impossible to curate those data sets to only what we may deem as “true” facts or the “perfect” images. Even if we were able to curate these training sets, who makes the determination of what to include or exclude?

The training data would need to be free of bias and free of sarcasm (a very human trait) for it to be reliable and useful. We’ve seen this play out with sometimes hilarious results. Google “A.I. Overviews” have told people to put glue on pizza to prevent the cheese from sliding off or to eat one rock a day for vitamins & minerals. Researchers and journalists traced these suggestions back to the training data from Reddit and The Onion.

Information architects have a saying: “All Data is Dirty.” It means no one creates “perfect” data, where every entry is reviewed, cross-checked for accuracy, and evaluated by a shared set of objective standards. Human bias and accidents always enter the data. Even the simple act of deciding what data to include (and therefore, which data is excluded) is bias. All data is dirty.

Bias & flawed data leads to the perpetuation of stereotypes

Many of the drawbacks of A.I. are interrelated — the “all data is dirty” problem is tied to D.E.I. Gender and racial biases surface in the answers A.I. provides, and A.I. will perpetuate the harms those biases produce as these tools become easier to use and more prevalent. These harms are ones society has only recently begun grappling with in a deep and meaningful way, and A.I. could roll back much of that progress.

We’ve seen this start to happen. Early reports from image creation tools describe a European white male bias inherent in these tools — ask one to generate an image of someone in a specific occupation, and you receive many white males in the results, unless that occupation is stereotypically “women’s work.” When A.I. is used to perform HR tasks, the software often advances those it perceives as male more quickly and penalizes applications that contain female names and pronouns.

The bias is in the data and very, very difficult to remove. The entirety of digital written language over-indexes privileged white Europeans who can afford the tools to become authors. This comparably small pool of participants is also dominantly male, and the content they have created emphasizes white male perspectives. To curate bias out of the training data and create an equally representative pool is nearly impossible, especially when you consider the exponentially larger and larger sets of data new LLM models require for training.

Further, D.E.I. overflows into environmental impact. Last fall, the Fifth National Climate Assessment outlined the country’s climate status. Not only is the U.S. warming faster than the rest of the world, but the assessment directly linked reductions in greenhouse gas emissions with reducing racial disparities. Climate impacts are felt most heavily by communities of color and low-income communities; therefore, climate justice and racial justice are directly related.

Flawed data leads to “Hallucinations” & harms Brands

“Brand Safety” and How A.I. can harm Brands

Brand safety is the practice of protecting a company’s brand and reputation by monitoring online content related to the brand. This includes content the brand is directly responsible for creating about itself as well as the content created by authorized agents (most typically customer service reps, but now AI systems as well).

The data that comes out of A.I. agents reflects on the brand employing the agent. A real-life example is Air Canada, whose A.I. chatbot gave a customer an answer that contradicted the information at the URL it provided. The customer chose to believe the A.I. answer, while the company argued it could not be held responsible if the customer didn’t follow the URL to the more authoritative information. In court, the customer won and Air Canada lost, resulting in bad publicity for the company.

Brand safety can also be compromised when a third party feeds A.I. tools proprietary client data. Some terms-and-conditions statements for A.I. tools are murky, while others are direct. Midjourney’s terms state:

“By using the Services, You grant to Midjourney […] a perpetual, worldwide, non-exclusive, sublicensable no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute text and image prompts You input into the Services” 

Midjourney’s Terms of Service Statement

That makes it pretty clear that by using Midjourney, you implicitly agree that your data will become part of their system.

The implication that our clients’ data might become available to everyone is a huge professional risk that Oomph avoids. Even using ChatGPT to summarize content covered by an NDA can open hidden risks.

What are “Hallucinations” and why do they happen?

It’s important to remember how current A.I. chatbots work. Like a smartphone’s predictive text tool, LLMs form statements by stitching together words, characters, and numbers based on the probability of each unit succeeding the previously generated units. The predictions can be very complex, adhering to grammatical structure and situational context as well as the initial prompt. Given this, they do not truly understand language or context. 
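The predictive-text analogy can be made concrete with a toy bigram model: count which word follows which in a small corpus, then generate by repeatedly emitting the most probable next word. Real LLMs operate on subword tokens with vastly longer context and learned weights rather than raw counts, but the core loop of predicting the next unit from the previous ones is the same (the corpus and seed here are illustrative):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build a bigram table: for each word, count the words that follow it.
bigrams: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(seed: str, length: int) -> list[str]:
    # Greedily emit the most frequent successor at each step. The output
    # "sounds" like the corpus without any understanding of the words.
    out = [seed]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return out

print(" ".join(generate("the", 4)))
```

Scaled up by many orders of magnitude, this is why a model can produce fluent, plausible text that is nonetheless wrong: fluency comes from the statistics, not from knowledge of what the statements mean.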

At best, A.I. chatbots are a mirror that reflects how humans sound without a deep understanding of what any of the words mean. 

A.I. systems try their best to provide an accurate and truthful answer without a complete understanding of the words they are using. A “hallucination” can occur for a variety of reasons, and it is not always possible to trace a hallucination’s origins or reverse-engineer it out of a system.

As many recent news stories attest, hallucinations are a huge problem with A.I. Companies like IBM and McDonald’s couldn’t get hallucinations under control and pulled A.I. from their stores because of the headaches it caused. If they can’t make their investments in A.I. pay off, it makes us wonder about the usefulness of A.I. for consumer applications in general. And all of these gaffes hurt consumers’ perception of the brands and the services they provide.

Poor A.I. answers erode Consumer Trust

The aforementioned problems with A.I. are well-known in the tech industry. In the consumer sphere, A.I. has only just started to break into the public consciousness. Consumers are outcome-driven. If A.I. is a tool that can reliably save them time and reduce work, they don’t care how it works, but they do care about its accuracy. 

Consumers are also misinformed or have only a surface-level understanding of how A.I. works. In one study, only 30% of people correctly identified six different applications of A.I. People don’t have a complete picture of how pervasive A.I.-powered services already are.

The news media loves a good fail story, and A.I. has been providing plenty of those. With most of the media coverage of A.I. being either fear-mongering (“A.I. will take your job!”) or about hilarious hallucinations (“A.I. suggests you eat rocks!”), consumers will be conditioned to mistrust products and tools labeled “A.I.” 

And for those who have had a first-hand experience with an A.I. tool, a poor A.I. experience makes all A.I. seem poor. 

A.I.’s appetite for electricity is unsustainable

The environmental impact of our digital lives is invisible. Cloud services that store our lifetime of photographs sound like feathery, lightweight repositories but are actually giant, electricity-guzzling warehouses full of heat-producing servers. Cooling these data factories and providing the electricity to run them are major infrastructure issues for cities around the country. And then A.I. came along.

While difficult to quantify, there are some scientists and journalists studying this issue, and they have found some alarming statistics: 

While the consumption needs are troubling, quickly creating more infrastructure to support them is not possible. New energy grids take multiple years and millions, if not billions, of dollars of investment. Parts of the country are already straining under the weight of our current energy needs and will continue to do so — peak summer demand is projected to grow by 38,000 megawatts nationwide in the next five years.

While a data center can be built in about a year, it can take five years or longer to connect renewable energy projects to the grid. While most new power projects built in 2024 are clean energy (solar, wind, hydro), they are not being built fast enough. And utilities note that data centers need power 24 hours a day, something most clean sources can’t provide. It should be heartbreaking that carbon-producing fuels like coal and gas are being kept online to support our data needs.

Oomph’s commitment to 1% for the Planet means that we want to design specific uses for A.I. instead of very broad ones. The environmental impact of A.I.’s energy demands is a major factor we consider when deciding how and when to use A.I.

Using our Values to Guide the Evaluation of A.I.

As we previously stated, our company values provide a lens through which we can evaluate A.I. and mitigate its negative effects. Many of the solutions overlap, mitigating more than one effect, and represent a shared commitment to extracting the best results from any tool in our set.

Smart

Driven

Personal

1% for the Planet

In Summary

While this article may read as strongly anti-A.I., we still have optimism and excitement about how A.I. systems can augment and support human effort. Tools created with A.I. can make tasks and interactions more efficient, can help non-creatives jumpstart their creativity, and can eventually become agents that assist with complex tasks that are draining and unfulfilling for humans to perform.

For consumers or our clients to trust A.I., however, we need to provide ethical evaluation criteria. We cannot use A.I. as a solve-all tool when it has clearly displayed limitations. We aim to continue to learn from others, experiment ourselves, and evaluate appropriate uses for A.I. with a clear set of criteria that align with our company culture.

To have a conversation about how your company might want to leverage A.I. responsibly, please contact us anytime.

