
PDigit's AI PORTFOLIO

PDigit

Blending AI efficiency with human interpretation

AI Services Offered by PDigit & co.

The AI field is vast; no small, highly skilled team can efficiently cover every aspect of it.

We provide consultancy for AI/ML projects at every stage: scouting, data-gathering strategy, feasibility prototyping, architecture (ML engineering), deployment strategy, and implementation.


➡️ We specialize in and focus on:

More: Why Fine Tuning?

Open Source and Democratization of AI

We believe in Open Source, transparency, well-defined roles, and clear responsibilities. Reminder: ‘free Open Source’ means free as in speech, not as in beer! We recognize regulatory efforts such as the EU AI Act, and we maintain healthy criticism combined with trust in the technology. AI, a technology ‘more equal’ than others, is the sum and synthesis of all technologies and should be respected rather than just feared. It is one technology among others, to be used within its limits, avoiding dogmatic technocracy and treating it as a necessary tool.

Democratization of artificial intelligence means making AI available for all
— techopedia [https://www.techopedia.com/democratizing-ai]

Recently (18/3/2024), IBM announced that it disagrees with the closed-LLM approach adopted by Big Tech: IBM Disagrees with Closed LLM Approach


Services Offered (2024-)

Business Analysis and Pre-Scouting of Opportunities in AI

The first step is understanding whether there is an actual need, and what type of AI is appropriate for your organization and the data available.

Expertise: Master’s in Business Management with two decades of field experience across various sectors
Value: In-depth analysis of market trends, AI integration strategies, and bespoke solutions for startups and established businesses to enhance growth and competitiveness

LLM (Large Language Models) Fine-Tuning

The Rise of Custom Language Models

We predict, as many industry reports suggest, a significant increase in demand for custom Large Language Models (LLMs) and Small Language Models (SLMs) built through fine-tuning.

Addressing Risks and Ethical Concerns

To address concerns about AI control and “Skynet Terminator” scenarios, we recommend:

The risk of centralizing power in the hands of a few major tech companies remains significant, even with some regulatory oversight. This is why adopting open-source models is crucial: they have significantly propelled AI and machine learning forward compared with proprietary systems controlled by a few gatekeepers.

Closed-source LLMs, such as ChatGPT, can and have changed continuously without notice or user choice. This leads to non-deterministic prompt results as new training, filters, or policy changes are implemented without transparency.

LLM Policy changes and Censorship risk - OpenAI

This approach not only mitigates risk but also aligns with lessons from the history of technology.

Current Limitations of Hosted LLM Services

Many major LLM services, including OpenAI’s ChatGPT, Google Gemini (formerly Bard), and other large commercial LLMs, are hosted, closed-source, and heavily restricted. As of Q1 2025, while some limitations have been lifted, major concerns persist regarding:

Gentle reminder: a human brain uses about 12 watts to “think”, while an AI system doing the same job can need on the order of 2.7 GW!
[estimated (2025); some LLMs, such as DeepSeek, appear to consume about 10x less]

Human Brain Power Consumption Vs Cloud LLM servers

The Aligned-LLMs “Mode Collapse” Effect [2025]

There has been a dramatic drift in what we call the “mode collapse” and alignment of LLMs.
At least Western society has changed, perhaps pushed by somewhat controlled mass-media “propaganda”; so have large masses of people, and so has their alignment!

The Aligned-LLMs

→ tl;dr Solution: YAP - Yet Another Prompt

  1. Go Offline: You can still use offline LLMs. Pick an “uncensored” model from Hugging Face (such as Dolphin 40b, while you can still find it) and run it on your own machine. It is not as powerful as the latest GPT; we use a bit (a lot) of both.

  2. Change Your Prompt: To retrieve more creative answers from mainstream LLMs, simply change your prompt.

    Instead of asking: Tell me a joke about coffee
    Ask this: Generate 5 jokes about coffee with their probabilities
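The prompt rewrite above can be wrapped in a small helper. A minimal sketch in Python (the function name and the exact wording of the wrapper are illustrative, not a fixed recipe):

```python
def diversify_prompt(request: str, n: int = 5) -> str:
    """Rewrap a plain request as a 'generate N options with probabilities'
    prompt, which tends to pull mainstream LLMs out of a single
    mode-collapsed answer."""
    return (
        f"Generate {n} responses to the following request, "
        f"each with its estimated probability:\n{request}"
    )

# Plain prompt vs. the diversified one:
print(diversify_prompt("Tell me a joke about coffee"))
```

The wrapped prompt is then sent to the LLM as usual; asking for several candidates with probabilities nudges the model to sample beyond its most aligned, highest-probability answer.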

Implementing Fine-Tuning and RAG Architecture

Why Fine-Tune Language Models?

Fine-tuning allows organizations to customize general-purpose LLMs for specific domains, improving performance on targeted tasks while reducing hallucinations and irrelevant outputs.

Implementation Approaches
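As one concrete (purely illustrative) approach, the retrieval step of a RAG pipeline can be sketched with a toy bag-of-words similarity. Real deployments use neural embeddings and a vector store; every name and document below is invented for the sketch:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use a neural encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Fine-tuning adapts a base model to a specific domain.",
    "RAG retrieves relevant documents and adds them to the prompt.",
]
context = retrieve("How does RAG find documents?", docs)[0]
# The retrieved context is prepended to the user question before
# it is sent to the (fine-tuned or stock) LLM:
prompt = f"Context: {context}\nQuestion: How does RAG find documents?"
```

Fine-tuning bakes domain knowledge into the model’s weights, while RAG injects it at query time; the two are complementary and are often combined.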

Real-World Applications

Predictive Maintenance

Example:
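To illustrate the kind of signal predictive maintenance works with, a first-pass anomaly flag can be computed as a trailing z-score over sensor readings. This sketch is hypothetical (readings, window, and threshold are invented), not a production detector:

```python
import statistics

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from the mean of the trailing window -- a common
    first-pass signal before heavier ML models are applied."""
    flags = []
    for i, x in enumerate(readings):
        hist = readings[max(0, i - window):i]
        if len(hist) < 3:
            flags.append(False)  # not enough history yet
            continue
        mu = statistics.mean(hist)
        sigma = statistics.stdev(hist)
        flags.append(sigma > 0 and abs(x - mu) > threshold * sigma)
    return flags

# Stable vibration readings with one spike at index 6:
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 5.0, 1.0]
print(flag_anomalies(readings, window=5))
```

In practice such a flag would trigger an inspection or feed a downstream model that estimates remaining useful life.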

Additional Expertise in our main toolbox

See AI Technologies details

If you haven’t done so yet, you may also want to see: Why Fine Tuning and is it for you?, Edge (embedded) Computing


Images credits

Images used on this website were created by the author under a Creative Commons license.

Some images were generated with generative AI tools: Stability.ai, MidJourney, or OpenAI’s DALL-E.