“The past 20 years were about making environmental data high quality and accessible; the next 20 will be about making it truly intelligent.”

On March 6, 2006, Mark Packard helped launch de maximis Data Management Solutions, better known as ddms, in St. Paul, Minnesota, and has led the company since its inception.

What led to the creation of a company focused solely on environmental data? What were the big challenges in that industry 20 years ago? And what will they be in the next 20 years?

Interview with Mark Packard, CEO

How did ddms get started?

I was running the Environmental Management Information Systems department at a company called Summit Envirosolutions, Inc. when we decided to spin off a new company with de maximis, Inc., one of our clients.

What we kept seeing was how long it took project professionals to access and interpret analytical results. People working on large environmental remediation projects (like Superfund groups and large petroleum portfolios) would put in requests for tables and maps, then wait weeks to get products back.

But they needed quick access to the data to make decisions and take action, like evaluating the effectiveness of a remediation approach or preparing regulatory reports. Waiting extended periods of time for data simply wasn’t practical.

Seeing that problem, we took what was, at the time, a novel approach: building and refining web delivery applications. That flipped the status quo on its head by making it possible for corporations to get faster, direct access to the expensive data generated through sampling and analysis. The tool we built was called Project Portal, a SaaS platform we've continued to evolve over the years.

Along with that innovative technology, we also introduced consulting services with pricing based on the value delivered, rather than the traditional time-and-materials approach. Once we tackled that need, we expanded into independent data validation services, helping clients ensure the quality and integrity of their environmental data. 

From the very beginning, our goal was to stay independent, meaning we would not become part of an engineering firm selling multiple services. Our focus has always been to build deep scientific expertise and stay dedicated to helping organizations share, secure, and use their environmental data more effectively.

What were the biggest challenges and trends 20 years ago?

Twenty years ago, the norm for environmental data was hand-created data products delivered on paper, or maybe (best case) as a PDF.

We knew the fastest way to get data to decision-makers would be through the internet, but in 2006 the technology was still fairly basic. We were limited by infrastructure on both sides: our systems as hosts and our clients as end users. Processing power, disk space, and disk read speeds created another set of bottlenecks. 

To address those challenges, we built and optimized our own server racks and maintained all the components ourselves. As fun as that was, we were happy to embrace the advancements that eventually came with cloud hosting and computing.

Beyond technology, a cultural shift was required. At the time, the industry was accustomed to waiting days or weeks for data products. Even though faster access seems obvious today, it took time to help users expect, and trust, more direct access to their environmental data through web-based tools.

Another challenge was working with laboratories to deliver consistent analytical results as electronic data deliverables (EDDs). That ecosystem evolved slowly, but over time it became the backbone of modern environmental data management systems.

What do you see coming up in the industry in the next 20 years?

With the advent of AI, the biggest force shaping the next 20 years of our industry is unfolding right now. We've talked about AI for years, but I'm not sure anyone anticipated just how quickly it would evolve or how broadly it would impact the way we work.

One major shift will be how quickly environmental data management systems can be implemented. With the increased use of standards, best practices, mature software platforms, and AI helping automate much of the heavy lifting, organizations will be able to spin up quality data management systems in a fraction of the time it takes today.

We will also see these systems integrate with other corporate information systems from day one, including financial, operations, and reporting and analysis platforms. That kind of integration will create a much richer organizational knowledge base.

AI will also help interpret the data, connect disparate datasets, and identify patterns (something we are working on today). Organizations will be able to make meaningful, informed decisions in months instead of the years or even decades it can take today.

This will also have a significant impact on the workforce. As more task-heavy work becomes automated, new roles will emerge further up the value chain, and the nature of the work will certainly evolve. 

Finally, within the environmental remediation industry, I believe AI will play a role in developing new remediation technologies that can better evaluate, target, and remediate environmental contamination. As we've seen with compounds like PFAS, we should also expect the continued emergence of new contaminants that will challenge existing monitoring and remediation approaches. Unlocking those advances will require new ways to interrogate and leverage what is quickly becoming one of the most valuable assets organizations possess today: their environmental data.


Today, ddms, Inc. employs 65+ people around the country, with about one-third based in the Twin Cities metro area. Serving a global client list across multiple industries, the team has added chemistry, geospatial analysis, and quality review capabilities to its data management expertise. In addition to working with software and cloud partners like AWS, Esri, and EarthSoft, ddms built and manages its own proprietary project management software, Project Portal, to deliver its services.