Architect and Executive Advisor to cloud-native enterprises and startups whose dreams match the size of their
technology challenges.

Andy Blum
Founder & Chief Software Architect
“I’ve pioneered low/no-code techniques that bring rapid development to bet-the-farm projects. During the pandemic, I assembled a team of top talent to build these techniques into a single platform.”
A 30+ year IT industry veteran who has served as Principal Architect for numerous Fortune 500 and government projects. From the first data warehouse dashboards to the latest Big Data/Cloud projects, I’ve seen the data landscape change significantly. Yet despite massive leaps in technology, common principles underlie data management and engineering. I’ve tapped into these principles to build a suite of low/no-code software that takes the hassle out of building data products.
Our software accelerators are a combination of patent-pending software techniques and methodology that can establish a strong Data Mesh and Data Fabric within an organization, all with less risk than traditional consulting models.
You can now use these accelerators to:
- Rapidly create data pipelines (custom, or built on top of industry tools)
- Build an awesome SaaS product
- Develop custom data products for the Data Mesh and Data Fabric
Trusted Architect to the following enterprises:

Led the architecture effort to modernize the data pipeline for billing radio, TV, nightlife venues, and other outlets on behalf of musicians. Offloaded Salesforce processing to AWS and created the initial data warehouse.

Thought leader involved in governing Data Engineering activities, including a multi-year Big Data migration plan from Teradata to Hadoop and AWS for AI/ML processing.

Led the architectural treatment of a Hortonworks solution utilizing Phoenix, Kafka, and Hive.
Designed discrete Java micro-services built from components that produce and consume Kafka topics.

Participated early in their Big Data journey, building custom frameworks to offload processing from Teradata into Hive on Hadoop. Developed a targeting algorithm to identify the queries that would benefit most from a move from an MPP appliance to a distributed database.
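To make the targeting idea concrete, here is a minimal sketch of how such a heuristic might rank offload candidates. The field names, weights, and sample queries are illustrative assumptions, not the original algorithm: the intuition is simply that heavy, frequently-run, scan-intensive queries gain the most from a distributed engine, while queries tied to MPP-only features are excluded.

```java
import java.util.*;

/** Illustrative sketch of an MPP-offload targeting heuristic.
 *  All names and weights are hypothetical. */
public class OffloadTargeting {

    /** Per-query workload statistics (hypothetical fields). */
    record QueryStats(String id, double cpuHours, double scannedGb,
                      int runsPerDay, boolean usesMppOnlyFeatures) {}

    /** Benefit score: frequent, heavy, scan-bound work scores highest;
     *  queries depending on MPP-only features are excluded outright. */
    static double score(QueryStats q) {
        if (q.usesMppOnlyFeatures()) return 0.0;
        return q.runsPerDay() * (0.7 * q.cpuHours() + 0.3 * q.scannedGb() / 100);
    }

    /** Rank all queries by descending offload benefit. */
    static List<QueryStats> rank(List<QueryStats> all) {
        return all.stream()
                  .sorted(Comparator.comparingDouble(OffloadTargeting::score).reversed())
                  .toList();
    }

    public static void main(String[] args) {
        List<QueryStats> stats = List.of(
            new QueryStats("daily_rollup", 4.0, 900, 24, false),
            new QueryStats("tactical_lookup", 0.01, 1, 500, false),
            new QueryStats("stored_proc_report", 6.0, 400, 4, true));
        rank(stats).forEach(q -> System.out.printf("%s -> %.1f%n", q.id(), score(q)));
    }
}
```

In practice the statistics would come from the warehouse’s query log, and the weights would be tuned against measured runtimes on the target platform.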

Created a metadata repository that consolidated data related to tables, columns, tasks/jobs, queries, users, and machines from over 800 databases, including DB2, Oracle, and SQL Server.

Led data model development and data access services for a system to automate NYC water and sewer permitting. Re-architected four existing databases into a single unified model.

During the financial upheaval of 2009, several companies merged. We replaced a mainframe data integration topology that processed over 1,000 files with a modern metadata-driven integration system.

Developed dashboards for their first data warehouse circa 1997. The dashboards were based on a custom cube algorithm that gave users the flexibility to choose various hierarchical layers of a snowflake schema to create dynamic cross-tabulations.
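The core of such a dynamic cross-tabulation can be sketched in a few lines: given denormalized rows from a snowflake schema, the user picks any two hierarchy levels (say, region vs. quarter, or country vs. year) and the measure is aggregated into a pivot table on the fly. This is a minimal sketch under assumed column layouts, not the original cube implementation.

```java
import java.util.*;

/** Minimal sketch of a dynamic cross-tabulation over denormalized
 *  snowflake-schema rows. Column positions and data are illustrative. */
public class DynamicCrossTab {

    /** Pivot: aggregate the measure column by a user-chosen row
     *  hierarchy level and column hierarchy level. */
    static Map<String, Map<String, Double>> crossTab(
            List<String[]> rows, int rowDim, int colDim, int measureIdx) {
        Map<String, Map<String, Double>> table = new TreeMap<>();
        for (String[] r : rows) {
            table.computeIfAbsent(r[rowDim], k -> new TreeMap<>())
                 .merge(r[colDim], Double.parseDouble(r[measureIdx]), Double::sum);
        }
        return table;
    }

    public static void main(String[] args) {
        // Columns: region, country, year, quarter, sales
        List<String[]> facts = List.of(
            new String[]{"EMEA", "DE", "1997", "Q1", "100"},
            new String[]{"EMEA", "FR", "1997", "Q2", "150"},
            new String[]{"AMER", "US", "1997", "Q1", "200"});

        // User picks hierarchy levels: region (0) vs. quarter (3)
        System.out.println(crossTab(facts, 0, 3, 4));
        // User drills down: country (1) vs. year (2)
        System.out.println(crossTab(facts, 1, 2, 4));
    }
}
```

Because the row and column dimensions are parameters rather than hard-coded, the same routine serves every drill-down path through the hierarchy, which is what made the original dashboards flexible.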

Designed and developed a flexible metadata-driven dashboard used for their first data warehouse.

Original designer for back-end systems that allowed for efficient warehouse management of trucks, pallets, and delivery routes.