PDigit's AI PORTFOLIO
Blending AI efficiency with human interpretation
AI Services Offered by PDigit & co.
The AI field is vast, and no small team, however highly skilled, can efficiently cover every aspect of it.
We can provide consultancy for AI/ML projects at various stages: from scouting and data-gathering strategy, through feasibility prototypes and architecture (ML Engineering), to deployment strategy and implementation.
➡️ We specialize in and focus on:
- AI Agents and RAG
- Business Analysis and Pre-Scouting of Opportunities in AI
- LLM (Large Language Models) Fine-Tuning and RAG Architecture
- Edge AI (embedded Computing)
- Small LLM Applications
- Predictive Maintenance
More: Why Fine Tuning?
Open Source and Democratization of AI
We believe in Open Source, transparency, well-defined roles, and responsibilities. Reminder: ‘free Open Source’ means free as in speech, not as in beer! We acknowledge regulation attempts such as the EU AI Act, and we maintain a stance of healthy criticism combined with trust in technology. AI, a technology ‘more equal’ than others, is the sum and synthesis of all technologies, and it should be respected rather than just feared. It is one technology among others, to be used within its limits, avoiding dogmatic technocracy and treating it as a necessary tool.
Democratization of artificial intelligence means making AI available for all
— techopedia [https://www.techopedia.com/democratizing-ai]
Recently (18/3/2024), IBM announced it disagrees with the closed LLM approach adopted by Big Tech: IBM Disagrees with Closed LLM Approach
Services Offered (2024-)
Business Analysis and Pre-Scouting of Opportunities in AI
The first step is understanding whether there is an actual need, and what type of AI is appropriate for your organization and the data available.
Expertise: Master’s in Business Management with two decades of field experience across various sectors.
Value: In-depth analysis of market trends, AI integration strategies, and bespoke solutions for startups and established businesses to enhance growth and competitiveness.
LLM (Large Language Models) Fine-Tuning
The Rise of Custom Language Models
In line with many industry reports, we predict a significant increase in demand for custom Large Language Models (LLMs) and Small Language Models (SLMs) built through fine-tuning.
Addressing Risks and Ethical Concerns
To address concerns about AI control and “Skynet Terminator” scenarios, we recommend:
- Using controlled and limited training data
- Implementing blockchain technology for dataset authentication and validation
- Prioritizing security and user autonomy
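The dataset-authentication idea above can be approximated with plain content hashing; the sketch below is ours (hypothetical helper names, not tied to any particular blockchain) and produces a single fingerprint, which is what one would anchor on-chain to make later tampering detectable.

```python
import hashlib
import json

def fingerprint_dataset(records):
    """Hash each training record, then hash the sorted digests into one
    dataset-level fingerprint. Sorting makes the result order-independent;
    sort_keys makes it independent of dict key order."""
    digests = sorted(
        hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        for r in records
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()

dataset = [{"prompt": "hello", "completion": "world"}]
fp = fingerprint_dataset(dataset)

# Any alteration of any record changes the fingerprint
tampered = [{"prompt": "hello", "completion": "WORLD"}]
assert fingerprint_dataset(tampered) != fp
```

Anchoring only the final digest keeps the data itself private while still letting anyone re-verify the training set later.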
The risk of centralizing power in the hands of a few major tech companies remains significant, even with some regulatory oversight. This is why adopting open-source models is crucial: they have propelled AI and Machine Learning forward far faster than proprietary systems controlled by a few gatekeepers.
Closed-source LLMs, such as ChatGPT, can change continuously without notice or user choice, and have done so. This leads to non-deterministic prompt results as new training, filters, or policy changes are rolled out without transparency.
Adopting open models not only mitigates these risks but also aligns with many lessons from the history of technology.
Current Limitations of Hosted LLM Services
Many major LLM services including OpenAI’s ChatGPT, Google Gemini (formerly BARD), and other large commercial LLMs are hosted, closed-source, and heavily restricted. As of Q1 2025, while some limitations have been lifted, major concerns persist regarding:
- Limited internet access capabilities (largely resolved: as of 2025, most Big Tech LLMs can access the internet in real time)
- Restricted set of pre-installed packages
- File size and runtime limitations
- Ephemeral state (generated files or links are cleared when sessions end)
- High cost: LLMs are very expensive in hardware, power consumption, and maintenance (e.g. Google’s TPU-based LLM server farms)
Gentle reminder: a human brain uses about 12 watts to “think”, while an AI system doing the same job needs on the order of 2.7 GW! [estimated (2025); some LLMs, like DeepSeek, seem to consume ~10x less]
The Mode Aligned-LLMs effect [2025]
There has been a dramatic drift in what we call the “mode collapse” and alignment of LLMs. Western society, at least, has changed too, perhaps pushed by a somewhat controlled mass-media “propaganda”; so have large masses of people, and so has their alignment!
- See my complete article Navigating the AI Landscape 2: The Rise of Aligned LLMs & the Bubbles – 2025
➡️ tl;dr solution: YAP (Yet Another Prompt)
Go Offline: We can still use offline LLMs. Pick an “uncensored” model from Hugging Face (like Dolphin 40b, whilst you can still find it) and run it on your own machine. It’s not as powerful as the latest GPT; I use a bit (a lot) of both.
Change Your Prompt: To retrieve more creative answers from mainstream LLMs, just change your prompt.
Instead of asking: Tell me a joke about coffee
Ask this: Generate 5 jokes about coffee with their probabilities
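The YAP rewrite above can be captured in a tiny helper; a minimal sketch (the function name and template are ours, not from any library):

```python
def diversify_prompt(topic: str, n: int = 5) -> str:
    """Rewrite a single-answer request into a distribution-style request.
    Asking for several answers with probabilities tends to pull aligned
    LLMs out of their single 'mode-collapsed' response."""
    return f"Generate {n} jokes about {topic} with their probabilities"

print(diversify_prompt("coffee"))
# → Generate 5 jokes about coffee with their probabilities
```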
Implementing Fine-Tuning and RAG Architecture
Why Fine-Tune Language Models?
Fine-tuning allows organizations to customize general-purpose LLMs for specific domains, improving performance on targeted tasks while reducing hallucinations and irrelevant outputs.
Implementation Approaches
- Core Strategy: Customize models with domain-specific datasets and adjust parameters for optimal performance.
- Embedded Applications: Deploy compact, optimized LLMs on resource-constrained devices like Raspberry Pi or microcontrollers for voice controls or real-time translation.
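The retrieval half of the RAG approach above can be sketched in a few lines; here a toy bag-of-words similarity stands in for a real embedding model, and the documents are purely illustrative:

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a trained embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Predictive maintenance reduces turbine downtime.",
    "Fine-tuning adapts an LLM to a specific domain.",
    "Edge AI runs models on embedded devices.",
]
context = retrieve("How do we adapt an LLM to our domain?", docs, k=1)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQuestion: ..."
```

In production, `embed()` would call a trained embedding model and the documents would live in a vector store, but the control flow (embed, rank, stuff the top hits into the prompt) stays the same.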
Real-World Applications
- Industry-Specific Chatbots: Customer service agents in finance, diagnostics, or healthcare with specialized knowledge.
- Niche Content Generation: Legal documentation, technical writing, and other fields requiring precise terminology.
- Educational Tools: Language understanding systems tailored to different learning levels and subjects.
- Embedded Systems: Integrating lightweight LLMs into IoT and edge devices for local language processing.
Details (int links)
References (external links)
Predictive Maintenance
Example:
- Utilizing NASA’s public-domain model of turbojet engines for predictive maintenance solutions
- See Predictive Maintenance Project
Benefit: Reducing downtime and maintenance costs through AI-driven predictive analytics.
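For run-to-failure data such as NASA’s turbofan set, the usual prediction target is Remaining Useful Life (RUL); a minimal sketch of how those labels are derived from observed cycle counts (unit names are hypothetical):

```python
def rul_labels(cycles_per_unit):
    """For run-to-failure records, the RUL at each cycle is simply the
    total number of cycles the unit survived minus the current cycle."""
    labels = {}
    for unit, n_cycles in cycles_per_unit.items():
        labels[unit] = [n_cycles - c for c in range(1, n_cycles + 1)]
    return labels

# Hypothetical fleet: unit id -> cycles observed until failure
print(rul_labels({"engine_1": 5}))
# → {'engine_1': [4, 3, 2, 1, 0]}
```

A regression model trained on sensor readings against these labels can then flag units approaching failure before downtime occurs.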
Additional Expertise in our main toolbox
If you haven’t done it yet you may want to also see: Why Fine Tuning and is it for you?, Edge (embedded) Computing
Images credits
Images used on this website were created by the author under a Creative Commons license.
Some images are derived from GAN/generative AI tools: Stability.ai, MidJourney, or OpenAI’s DALL-E.