Dave Hillman
living long and perspiring...
Some things to think about...
If you don't understand how something works, don't do it.
If you have something to say, say it.
Work smarter, not harder!
It's always darkest before it goes pitch black.
Rules often seem to make good sense until you find that breaking them is not always as bad as once thought.
Located under the clear blue skies of North Carolina.
Semi, kind-of, sort-of retired... I worked most of my life as a software engineer; now I spend my days figuring out things I never had time for when I was working.
Writing code - because I can!
Taught technology (programming, web...) courses at The Johns Hopkins University for over 20 years. Would like to do more teaching.
Wrote a book and magazine articles. Currently focused on blogging and working on another book.
USAF Veteran!
Background in...
Artificial Intelligence including expert systems, neural networks, and LLMs.
Software Engineering, both waterfall and agile.
Web Stack Development, front and back.
Project Management, planning and monitoring matters!
Data Engineering because it has to be done,
and
spoiling my granddaughter.
Personal Projects...
< Localized LLMs (L3M) >
Localized Large Language Models (LLMs, or L3Ms when run locally) are computational models that process and generate text to simulate aspects of human intelligence.
The goal: identify and evaluate LLMs for definitions, categorization, and summarization, all running on local CPU-based hardware.
ChatGPT and other cloud-based LLMs demand serious hardware resources. Running LLMs on a local machine means you're not tethered to a connection, you get more privacy, and you have the flexibility to swap models as they improve.
Capabilities include question answering, categorization, document summarization, keyword extraction (general terms, names, places), definition generation, sentiment analysis, and JSON output.
Current Focus: integration with other tools for queries and storing results.
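Here's a minimal sketch of the local-model idea in Python, assuming the llama-cpp-python library and a quantized GGUF model file already on disk (both are illustrative choices, not a fixed stack):

from llama_cpp import Llama

# Load a quantized model entirely on the CPU; no network connection needed.
# The model path is a placeholder; any GGUF instruct-style model works.
llm = Llama(model_path="./models/example.gguf", n_ctx=2048, verbose=False)

document = "Local LLMs trade raw speed for privacy and offline availability."

# Ask for a summary and request JSON so the result is easy to store and query.
prompt = (
    "Summarize the following text in one sentence and reply as JSON, "
    'like {"summary": "..."}.\n\n' + document + "\n\nJSON:"
)
result = llm(prompt, max_tokens=128, stop=["\n\n"])
print(result["choices"][0]["text"].strip())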
Technologies used
- LLMs
- Python
- LLM Support Libraries
- Web Stack
< Enhanced Entity Attribute Value (EEAV) >
EEAV is a very flexible model for capturing and exploiting data sets.
EEAV focuses on data-driven application development.
Starting with a table of data (CSV, JSON), ingest it, then manipulate it for a variety of purposes. Any 2-dimensional data set can be captured in 3 tables: dataset, attributes, and values.
A secondary aspect of this effort is a web-based environment that makes it easily accessible and available.
Current Focus: developing ways to integrate into other applications quickly and efficiently.
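Here's a minimal sketch of the 3-table capture in Python with SQLite; the table and column names are illustrative stand-ins, not the project's actual schema:

import csv
import sqlite3

# The whole EEAV capture: any 2-dimensional data set lands in these 3 tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE eav_dataset   (ds_id   INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE eav_attribute (attr_id INTEGER PRIMARY KEY, ds_id INTEGER, name TEXT);
    CREATE TABLE eav_value     (row_num INTEGER, attr_id INTEGER, val TEXT);
""")

# Ingest a CSV: the header row becomes attributes, each cell becomes a value.
with open("input.csv", newline="") as f:  # "input.csv" is a placeholder file
    rows = list(csv.reader(f))

cur.execute("INSERT INTO eav_dataset (name) VALUES (?)", ("input.csv",))
ds_id = cur.lastrowid

attr_ids = []
for name in rows[0]:
    cur.execute("INSERT INTO eav_attribute (ds_id, name) VALUES (?, ?)", (ds_id, name))
    attr_ids.append(cur.lastrowid)

for row_num, row in enumerate(rows[1:], start=1):
    for attr_id, cell in zip(attr_ids, row):
        cur.execute("INSERT INTO eav_value VALUES (?, ?, ?)", (row_num, attr_id, cell))
conn.commit()

Because the schema never changes, new data sets with entirely different columns can be ingested without touching the database structure.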
Technologies used
- Python
- Web Stack
- Postgres, SQLite
- JavaScript, HTML, CSS
- Tabulator (table viewer)
Current Software Projects
I'm currently building a set of tools that demonstrate capabilities that could stand alone or operate as an integrated tool set.
Media Ingest: load and extract text from video, audio, PDF, office documents, and plain text files.
Parser: extract words and sentences from text input.
Processor: apply analytic processes via Large Language Models including categorization, summarization, extraction, and sentiment analysis.
Viewer: examine a complex JSON-based data structure.
Presentation: render JSON-encoded data as tables, charts (e.g., bar, line, circle), graphs (node/edge), and maps.
These tools are developed using Python and Flask, with a web-based front-end (HTML, CSS, and JavaScript/AJAX).
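Here's a minimal sketch of how one of these tools could be exposed as a Flask service; the /parse route and the naive sentence splitting are illustrative, not the actual implementation:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/parse", methods=["POST"])
def parse():
    # Accept raw text; return words and sentences as JSON so the other
    # tools (Processor, Viewer, Presentation) can consume the result.
    text = request.get_json(force=True).get("text", "")
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    return jsonify({"sentences": sentences, "word_count": len(text.split())})

if __name__ == "__main__":
    app.run(debug=True)

POST a JSON body like {"text": "Hello there. How are you?"} to /parse and the parsed result comes back as JSON, ready for the Viewer or Presentation tools.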
History of Computing Book Project...
My favorite class in college was "The History of Computing". I really enjoyed the narrative of how things came to be. So I'm writing a book about computing technology from 1950 to 2025.
The main writing is finished, now it's about editing, editing, and more editing. The graphics are done, but I may rework a few of them.
I've explored different technologies, from programming languages through the Internet to quantum computing, along with some history focused on technology and how we dealt with information in 1950 versus 2025.
The goal is to finish by June 2025 (on target).