
AgTech

Data Pipelines and Marketplaces

Improving agronomic data quality and creating a single source of truth

The project is ongoing (a multi-year engagement). We continue to improve data quality, reduce errors, and surface additional insights so our client can expand the platform into a growing ecosystem.

90% fewer errors

From 10% error rate to less than 1%

Better data valuation

Clear metrics on the quality and usage of data

Much faster project completion

Increased velocity and significantly better component/service reuse

Leveraged specialized expertise

Easy teaming on more specialized technologies such as Scala allowed the client's core teams to stay focused

Data Pipelines and Marketplaces

Our client needed the expertise and capability to consolidate their data into a central hub: a next-generation marketplace and a lighthouse for future partners to integrate data pipelines and IoT devices.

Team
  • Project/product mgmt.
  • Architecture
  • Development
  • QA
 
Build type

Greenfield

Tech stack
  • Business Logic – Scala
  • Concurrency – Akka
  • Data Storage – PostgreSQL
  • Platform – AWS/Docker/IoT

What we did

Assess and aggregate data

We worked with cross-functional teams across the company to inventory data locations and industry-specific activities while assessing access and quality.

  • Created a comprehensive data map, including access via services and third-party dependencies
  • Unified the data model and formats for consistency
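Unifying a data model typically means mapping each source's native format onto one common record type. A minimal sketch of that idea in Scala, where the record and source formats (`FieldReading`, soil-moisture readings, etc.) are illustrative assumptions, not the client's actual schema:

```scala
// Hypothetical common record every source is normalized into.
case class FieldReading(fieldId: String, metric: String, value: Double, unit: String)

// Source A reports soil moisture as a percentage string, e.g. "37.5%".
def fromSourceA(fieldId: String, moisture: String): FieldReading =
  FieldReading(fieldId, "soil_moisture", moisture.stripSuffix("%").toDouble, "pct")

// Source B reports the same metric as a fraction between 0 and 1.
def fromSourceB(fieldId: String, moistureFraction: Double): FieldReading =
  FieldReading(fieldId, "soil_moisture", moistureFraction * 100, "pct")
```

Once every source passes through adapters like these, downstream services only ever see one shape of data, which is what makes a single source of truth workable.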

Adding scale and flexibility

We built robust, scalable APIs that handled hundreds of thousands of requests per second.

  • Distributed, next-generation in-memory data storage and retrieval
  • Observable system diagnostics monitoring
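At the heart of an in-memory storage layer like the one described above is a thread-safe map. A minimal single-node sketch using Scala's standard-library `TrieMap` (the production system used Akka and a distributed store; the class and method names here are illustrative):

```scala
import scala.collection.concurrent.TrieMap

// Hypothetical sketch: a lock-free, thread-safe in-memory key-value store.
// TrieMap supports concurrent reads and writes without external locking,
// which is what lets an API layer serve many requests in parallel.
final class InMemoryStore[K, V] {
  private val data = TrieMap.empty[K, V]

  def put(key: K, value: V): Unit = data.update(key, value)
  def get(key: K): Option[V]     = data.get(key)
  def size: Int                  = data.size
}
```

A distributed version would shard keys across nodes, but the per-node read/write path looks much like this.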

Improving data quality

Data tracking was built in to monitor a large volume of transactions every second. The right tools were selected and implemented to meet the transactional and scale demands. To ensure data accuracy, machine learning was used to find patterns, classify data, and promote data integrity.

  • Resilient full-featured data lineage
  • Proactive data quality alerts and notifications
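Proactive quality alerts generally start as rules evaluated against each record as it flows through the pipeline. A hedged sketch in Scala (the record shape, thresholds, and alert wording are assumptions for illustration):

```scala
// Hypothetical incoming record; fields are illustrative.
case class Reading(fieldId: String, moisturePct: Double, recordedAt: Long)

// Each rule yields an alert message when the reading looks suspect;
// an empty result means the record passed all checks.
def qualityAlerts(r: Reading): List[String] = List(
  Option.when(r.moisturePct < 0 || r.moisturePct > 100)(
    s"${r.fieldId}: moisture ${r.moisturePct} out of range"),
  Option.when(r.recordedAt <= 0)(
    s"${r.fieldId}: invalid timestamp ${r.recordedAt}")
).flatten
```

Records that trigger alerts can be quarantined and routed to notifications rather than silently corrupting downstream aggregates, which is how an error rate gets driven from 10% toward 1%.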

Standards and change management

We created a working group to clarify best practices and introduced new coding patterns and libraries for future build and deploy efforts.
