
In this Leadership Interview, Dr. Martial Michel, Chief Technologist and Vice President of AI & Data Sciences at Infotrend, shares his perspective on what it takes to deliver responsible, mission-aligned AI in the federal space. Drawing on deep technical expertise and real-world federal experience, Dr. Michel discusses how Infotrend’s TREND values shape its approach to innovation, why curiosity and hands-on work matter for the next generation of technologists, and how responsible AI practices can coexist with rapid experimentation. His insights offer a thoughtful look at leadership, technology, and the discipline required to build secure, scalable solutions that truly serve federal missions.
Read on to explore his perspective on responsible AI, technical leadership, and what it takes to deliver mission-ready solutions for federal clients.
What sets Infotrend apart when it comes to delivering AI and data science solutions to federal clients?
“Infotrend distinguishes itself in delivering Artificial Intelligence (AI) and Data Science (DS) solutions to federal clients through a mission-driven philosophy anchored in its TREND values: Trust, Responsibility, Excellence, Nimble, and Drive. These principles shape not only how Infotrend builds technology, but also how it partners with federal agencies that operate in complex, high-stakes environments.
At the foundation is Trust: Infotrend positions itself as a reliable, transparent partner that truly listens to each agency’s mission, constraints, and operational realities. Rather than pushing one-size-fits-all technologies, we invest time in understanding federal priorities, risk profiles, and compliance requirements, building long-term relationships grounded in credibility and shared mission alignment.
Responsibility is equally core to our federal approach. Infotrend emphasizes responsible stewardship of sensitive government data and the responsible design of AI and Machine Learning (ML) systems, encompassing model transparency and auditability, secure cloud architectures, and governance frameworks. Our work spans Investigative Services, Natural Language Processing (NLP), DS, Computer Vision (CV), forecasting models, supervised and unsupervised learning, and large-scale data processing. In real-world federal settings, this has included analyzing billions of financial and operational records using secure cloud pipelines, deploying models via AWS SageMaker and Redshift, and surfacing insights through mission-ready dashboards. Our responsibility-first mindset ensures these solutions are not only technically advanced but also compliant, ethical, and operationally reliable.
The value of Excellence is reflected in Infotrend’s technical depth. We provide end-to-end data services (eDiscovery, Digital Forensics, data ingestion, cleansing, normalization, integration, and governance) as well as advanced analytics and ML engineering. This ensures federal clients are not merely handed a black box but receive a fully built analytics ecosystem. Our platform-agnostic stance further elevates this excellence: Infotrend selects the right tools for each mission rather than anchoring clients to a single vendor, enabling optimal performance, lower long-term risk, and seamless integration with existing federal technology stacks.
Being Nimble is another critical differentiator. Federal missions evolve rapidly, and Infotrend embraces agile methodologies, rapid prototyping, iterative development, and flexible delivery models. Infotrend develops in-house solutions to support the growing Research & Development needs of most agencies. Infotrend’s CoreAI (https://github.com/Infotrend-Inc/CoreAI) is a containerized environment supporting CUDA-enabled deep learning, TensorFlow, PyTorch, and OpenCV. The platform provides a mission-ready, foundational container that enables reproducible data-science workflows and integrates with best-in-class frameworks for prototyping and deploying advanced analytics. Infotrend’s CoreAI Demo Projects (https://github.com/Infotrend-Inc/CoreAI-DemoProjects) provide hands-on demonstrations across diverse domains (AI, NLP, DS, CV, etc.), offering end-users clear examples built on Infotrend’s CoreAI stack. We openly share reference projects to enable rapid experimentation using the tool.
Finally, Infotrend’s Drive, its forward-leaning commitment to innovation and mission success, pushes our teams to continually explore new technologies and introduce modern architectures that advance agency objectives. This drive ensures that solutions are not only functional today but positioned for future scaling, modernization, and evolving mission landscapes. Infotrend’s leadership plays an active and visible role in the federal technology community, particularly through its strong engagement with the American Council for Technology – Industry Advisory Council (ACT-IAC) and other government-industry forums. Company executives regularly present at Data & AI events, contribute to working groups, and participate in community initiatives that advance emerging-technology collaboration across government. Infotrend frequently promotes these engagements on LinkedIn, underscoring its commitment to thought leadership, public-sector innovation, and elevating discussions around Investigative Services, AI, and mission-driven modernization.
Infotrend combines Trust, Responsibility, Excellence, Nimble execution, and Drive with deep Investigative Services and AI/ML expertise, and a federal-first mindset. Infotrend delivers differentiated, high-impact solutions that are secure, scalable, and mission-aligned.”
What is a piece of advice you have for someone starting out in this industry? 
“Starting a career in AI, DS, or modern software engineering is an exciting journey, and one of the most valuable traits a candidate can bring is curiosity.
The strongest candidates aren’t always those who know the latest frameworks, but those who genuinely enjoy exploring how things work, why they work that way, and what might work better. Curiosity drives experimentation, and experimentation builds real understanding; the kind that stands out in interviews and, more importantly, on real projects.
Along with curiosity, approach new tools, models, and ideas with an analytical eye. Don’t just accept that something is “state of the art” because someone else says so; test it, and understand why one approach succeeds where another fails. This mindset is invaluable in the workplace, where the “best” solution is the one that aligns with the mission, constraints, and timeline. Interviewers look for candidates who know the technology, can think critically, and adapt quickly as technology evolves.
Another major differentiator for early-career professionals is having a meaningful public GitHub portfolio. This doesn’t mean uploading classroom assignments or templated tutorials; it means showcasing projects they truly understand and can explain, defend, and walk through confidently during an interview. Whether it’s a data pipeline, a tuned model, a small open-source contribution, or an experiment comparing approaches, the goal is to demonstrate that the candidate can build, test, reason, and iterate. A thoughtful GitHub shows interviewers that the candidate doesn’t just consume content; they can create and apply it.
Curiosity, analytical thinking, and authentic, hands-on work demonstrate an ability to solve real problems and to take on increasingly complex responsibilities. That’s precisely the kind of candidate employers are excited to bring onto their teams.”
How do you balance innovation with responsible AI use, especially when working with sensitive federal data?
“Balancing innovation with responsible AI practices, especially when working with sensitive federal data, starts with creating the right environment for experimentation. One of the most effective approaches is to build an on-premises test bed populated with synthetic or fully anonymized data that mimics the structure, behavior, and complexity of the real dataset. This allows teams to explore new models, architectures, and workflows safely, without exposing mission-sensitive information. Within a controlled test environment, it is possible to prototype rapidly and evaluate cutting-edge techniques while maintaining strict compliance and data-handling integrity.
A critical element of this process is developing a deep understanding of the dataset itself. Responsible AI begins long before model training: it starts with knowing what the data represents, who it affects, and where potential biases may live. By inspecting distributions, lineage, sampling patterns, and historical artifacts, it is possible to anticipate risks, design more equitable models, and prevent blind spots that could lead to misleading outcomes. This level of understanding is fundamental in federal contexts, where decisions may impact public services, security, compliance, or citizen benefits.
Innovation also thrives when grounded in standards-based practices. Leveraging established frameworks (NIST AI Risk Management Framework, agency-specific governance policies, MLOps standards, reproducibility guidelines, etc.) ensures that advances are defensible, auditable, and aligned with federal expectations. Standards serve as guardrails that keep experimentation productive rather than chaotic.
Finally, the most effective solutions emerge through iterative development with peers. Collaboration among data scientists, engineers, domain experts, and mission stakeholders allows ideas to be challenged, refined, and strengthened. Peer reviews, model audits, red-team testing, and shared evaluations help evolve a prototype into a trustworthy, mission-ready solution. This process ensures that innovation isn’t a solo effort but a disciplined, team-driven journey toward a better, more responsible outcome.”
What do you think is the most underutilized or misunderstood Infotrend capability and how could it be better leveraged?
“One of Infotrend’s most underutilized and often misunderstood capabilities is Infotrend’s CoreAI container ecosystem (both the CoreAI container platform and its companion Demo Projects). Many teams see CoreAI as simply a development environment, when in reality it is a strategic accelerator; a building block designed to compress the time, cost, and complexity of building mission-ready AI, ML, NLP, or CV solutions for federal clients.
Infotrend’s CoreAI (https://github.com/Infotrend-Inc/CoreAI) provides a single, standardized, CUDA-enabled container (with a corresponding CPU-only build) that consolidates the full stack of modern DS/ML tooling (TensorFlow, PyTorch, OpenCV, Jupyter, MLOps utilities, visualization libraries, and more) into a reproducible environment that works the same on-premises, in classified enclaves, or in the cloud. This consistency eliminates one of the most significant friction points in AI adoption: the time spent recreating or troubleshooting environments rather than solving mission problems. With CoreAI, teams can focus immediately on modeling, experimentation, and analytics rather than setup and configuration. The container is designed to run in non-privileged mode, making it far safer and better suited for federal environments with strict security controls. Its seamless integration with Podman, a rootless, security-focused container engine, enables the platform to be deployed on-premises or in enclaves without requiring elevated permissions. This combination allows teams to experiment with advanced AI/ML tooling while fully aligning with zero-trust and least-privilege principles.
The platform becomes even more powerful when investigating the blueprints available as Jupyter Notebooks in the CoreAI DemoProjects (https://github.com/Infotrend-Inc/CoreAI-DemoProjects), which provide practical, end-to-end examples of the tool’s domains of use. Infotrend is releasing videos on LinkedIn that demonstrate its technologies; be sure to check them out.
Infotrend’s CoreAI is a foundational building block. The container can be used as a base image, via a Docker FROM instruction, to build more complex solutions. Embedding it into early project phases, proof-of-concept efforts, and innovation sprints can significantly speed time-to-impact by leveraging the baseline AI sandbox and enabling analysts, engineers, and data scientists to collaborate within a consistent environment.”
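As an illustration of this reuse pattern, a project-specific image can be layered on top of the CoreAI base. Note that the image reference, file paths, and tag below are hypothetical placeholders, not the actual published CoreAI coordinates; consult the CoreAI repository for the real image names:

```dockerfile
# Hypothetical Containerfile extending a CoreAI base image.
# "infotrend/coreai:latest" is a placeholder reference; see the
# CoreAI GitHub repository for the actual published image names.
FROM infotrend/coreai:latest

# Layer mission-specific Python dependencies on top of the shared baseline.
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt

# Add the project's own notebooks and source code to the workspace.
COPY notebooks/ /workspace/notebooks/
```

Such an image could then be built and run without elevated privileges using rootless Podman, for example with “podman build -t my-mission-ai .” followed by “podman run --rm my-mission-ai”, keeping the deployment aligned with the least-privilege approach described above.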
How do you stay current on advancements in research, tools, and regulations?
“Staying current in AI, DS, and federal technology requires a deliberate, multi-layered approach that blends research awareness, regulatory monitoring, and community engagement. One of the most effective strategies is active participation in industry forums, working groups, and public-private partnerships (such as ACT-IAC), where practitioners and government leaders openly discuss emerging challenges, new tools, and policy directions. These forums provide real-time insight into how innovations are being applied across agencies and into evolving compliance or governance considerations.
On the research side, I use AI-focused browsers and topic-aggregation tools that curate daily updates across ML, CV, and responsible AI. These platforms surface new papers from arXiv, breakthroughs from major research labs, open-source releases, and practical engineering write-ups. Having this steady stream of information and a natural curiosity allows me to quickly scan trends, identify patterns, and dive deeper into areas that most directly impact federal missions.
Equally important is staying aligned with regulatory and standards-setting bodies. I regularly monitor publications from NIST, including updates to the AI Risk Management Framework, guidance on GenAI, cybersecurity standards, and evaluation methodologies. I also track releases from OMB, DHS, DoD, and other agencies shaping AI governance and compliance expectations.
By combining community engagement, curated research intake, and regulatory vigilance, I maintain a balanced, up-to-date perspective that informs sound technical and mission-focused decisions.
Finally, I participate in forums and events in the Northern Virginia and DC areas (or anywhere in the world), sometimes as an attendee, and at other times as a presenter or panelist. If the topics discussed in this interview pique your curiosity or if you are looking for a presenter, let’s talk.”
Dr. Michel on Staying Current on Technology Advancements
Dr. Michel’s perspective underscores the importance of curiosity, discipline, and responsibility in building AI solutions that are not only innovative but also trusted, secure, and mission-ready. Visit our Articles page to catch up on our previous leadership interviews with Jason Gwinn (VP of Operations & Program Delivery) and Jesus Jackson (CTO), and keep an eye out for the next installment in our Leadership Series at the end of February.



