<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Dvloper Blog]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://blog.dvloper.io/</link><image><url>https://blog.dvloper.io/favicon.png</url><title>Dvloper Blog</title><link>https://blog.dvloper.io/</link></image><generator>Ghost 5.71</generator><lastBuildDate>Thu, 07 May 2026 13:49:44 GMT</lastBuildDate><atom:link href="https://blog.dvloper.io/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Q1 2026 Events Highlights]]></title><description><![CDATA[<p>Key European Cybersecurity Events &#x2014; January to March 2026</p><p>The first quarter of 2026 marked an intense period of engagement for our teams across Europe. Our people participated as speakers, presenters, and technical contributors in four major cybersecurity events &#x2014; spanning Greece, Italy, and Romania &#x2014; all tied to EU-funded</p>]]></description><link>https://blog.dvloper.io/q1-2026-events-highlights-2/</link><guid isPermaLink="false">69f996b37eb4870001b69f85</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Tue, 05 May 2026 10:02:49 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2026/05/8513-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.dvloper.io/content/images/2026/05/8513-1.jpg" alt="Q1 2026 Events Highlights"><p>Key European Cybersecurity Events &#x2014; January to March 2026</p><p>The first quarter of 2026 marked an intense period of engagement for our teams across Europe. 
Our people participated as speakers, presenters, and technical contributors in four major cybersecurity events &#x2014; spanning Greece, Italy, and Romania &#x2014; all tied to EU-funded projects in which we are active consortium partners. Here is a summary of what took place and how we contributed.</p><p></p><h2 id="1-cyberguard-%E2%80%94-1st-general-assembly-meeting"><strong>1. CYBERGUARD &#x2014; 1st General Assembly Meeting</strong></h2><p>29 January 2026&#xA0; |&#xA0; Thessaloniki, Greece&#xA0; |&#xA0; International Hellenic University (IHU)</p><p>The CYBERGUARD Consortium launched its First General Assembly Meeting at Thessaloniki City Hall, hosted by the International Hellenic University (IHU). The event brought together over 40 participants and marked the first major project milestone since implementation began on 1 December 2024.</p><p>CYBERGUARD aims to strengthen Security Operations Centres (SOCs) through AI-driven technologies and an efficient Cyber Threat Intelligence (CTI) sharing framework, improving the resilience of SOCs against complex and evolving attack vectors. The meeting covered project objectives and guidelines, management progress, AI-driven cybersecurity system design, CTI and offensive strategies, threat detection achievements, and technical partner demonstrations. Partners also reviewed Pilot Use Cases in terms of usability, performance, and operational relevance.</p><p>Dr. Mihai P&#x102;UN (I-ENERGYLINK), consortium coordinator, opened the meeting. Mrs. Malgorzata Agata KOWALSKA (ECCC Project Officer) joined remotely to address the consortium. The project is funded under the Digital Europe Programme and brings together 13 partners from Cyprus, Greece, Spain, and Romania.</p><p><strong>Our contribution</strong></p><p>DVLOPER was represented by Tudor Chihaia, Mihai Chihaia, and R&#x103;zvan Georgescu. The team delivered a live demonstration of the CYBERGUARD Dashboard to consortium partners, showcasing its current state and capabilities. 
They also conducted a technical demonstration of Suricata intrusion detection rules, explaining their structure and operational logic within the CYBERGUARD architecture.</p><p></p><h2 id="2-intersoc-%E2%80%94-2nd-general-assembly-meeting"><strong>2. INTERSOC &#x2014; 2nd General Assembly Meeting</strong></h2><p>10 February 2026&#xA0; |&#xA0; Terni, Italy&#xA0; |&#xA0; Hosted by ASM TERNI</p><p>The INTERSOC (INTERconnected Security Operation Centres) project held its 2nd General Assembly in Terni, Italy. The project focuses on disruption preparedness and resilience of digital infrastructures through advanced threat forecasting, cyber-incident detection and response, and the development of a user-centric intelligent threat defence platform.</p><p>The agenda included work package progress updates, Final Review Planning, a visit to the ASM TERNI pilot site, and a dedicated workshop &#x2014; &quot;Overall INTERSOC Architecture, Main Solutions and Demos&quot; &#x2014; covering the technology stack, integration interfaces, dashboard, and SOC-targeted system architecture. A workshop on Pilot Testing and Evaluation also took place.</p><p>INTERSOC is coordinated by EXIMPROD ENGINEERING (RO), funded by the ECCC under the Cybersecurity and Trust Programme (Grant No. 101145853), and gathers 13 partners from Spain, Italy, Greece, Cyprus, and Romania.</p><p><strong>Our contribution</strong></p><p>DVLOPER was represented by Louis Sardarescu and Adrian Batanu. Louis presented the latest Dashboard updates to the consortium, walking partners through the current development status and upcoming features. Adrian participated in the technical working discussions around the SOC Connector, contributing alongside the other technical partners in the consortium to define integration approaches and next steps.</p><p></p><h2 id="3-secur-eu-%E2%80%94-2nd-general-assembly-meeting"><strong>3. 
SECUR-EU &#x2014; 2nd General Assembly Meeting</strong></h2><p>12 February 2026&#xA0; |&#xA0; Terni, Italy&#xA0; |&#xA0; Hosted by ASM TERNI</p><p>Two days later, also in Terni, the SECUR-EU Consortium held its 2nd General Assembly. The project &#x2014; &quot;Enhancing Security of European SMEs in Response to Cybersecurity Threats&quot; &#x2014; focuses on open-source security solutions for SMEs, white-hack testing via the HackOlympics initiative, and improving cybersecurity preparedness across the SME market.</p><p>The agenda covered work package progress, Final Review Planning, an architecture and demos workshop, and an open debate on SME training activities and stakeholder involvement &#x2014; addressing capacity-building strategies and how to maximise the project&apos;s reach and sustainability. A Pilot Testing and Evaluation session was also held.</p><p>SECUR-EU is coordinated by EXIMPROD ENGINEERING (RO), funded under the ECCC Cybersecurity and Trust Programme (Grant No. 101128029), and gathers 14 consortium partners and 2 supporting organisations from across Europe.</p><p><strong>Our contribution</strong></p><p>DVLOPER was represented by Carol Bazga and Adrian Batanu, both actively engaged in the pilot use case discussions. Carol also delivered a dedicated presentation on the status of DVLOPER&apos;s pilot &#x2014; the Distributed IDS System Deployed on the Edge for IoT Cyber Attack Detection &#x2014; covering current implementation progress, technical findings, and next steps within the SECUR-EU validation framework.</p><p></p><h2 id="4-cra-europe-2026-%E2%80%94-cyber-resilience-in-action"><strong>4. CRA EUROPE 2026 &#x2014; Cyber Resilience in Action</strong></h2><p>4 March 2026&#xA0; |&#xA0; Romanian Parliament, Bucharest&#xA0; |&#xA0; CYBERFORT Consortium, coordinated by I-ENERGYLINK</p><p>The flagship event of the quarter was the CRA EUROPE 2026 conference, held at the Romanian Parliament in the Hall Nicolae IORGA. 
The event was organised by the CYBERFORT Consortium and coordinated by I-ENERGYLINK, with the support of the Romanian National Cyber Security Directorate (DNSC). It drew over 200 participants and 35 speakers and moderators from across Europe.</p><p>The event focused on the practical implementation of the Cyber Resilience Act (CRA) and its implications for SMEs, public authorities, and critical sectors. Three thematic sessions addressed CRA compliance frameworks, the role of standardisation, European policy perspectives, CYBERFORT project outcomes, pilot use case showcases, and the path from compliance to capability. Key representatives from ENISA, DNSC, ELECTRICA, ASRO, Deloitte, and the Authority for the Digitalization of Romania (ADR) were among the contributors.</p><p>Dr. Mihai P&#x102;UN (I-ENERGYLINK, CYBERFORT Coordinator) opened the conference, framing the event as a transition point: &quot;CRA EUROPE 2026 is moving from Policy to Practice, from Compliance to Capability, and from Ambition to measurable Cyber Resilience.&quot;</p><p><strong>Our contribution</strong></p><p>Mihai Chihaia, CIO of DVLOPER, participated as a speaker in Session 2 &#x2014; &quot;Cybersecurity Projects, CRA Compliance &amp; European Policy Perspectives&quot; &#x2014; as the Manufacturer/Vendor voice on the panel. 
His intervention addressed three interconnected themes:</p><ul><li>The standards gap as the primary compliance blocker &#x2014; with Type C product-specific standards still pending publication and SBOM format not yet mandated, manufacturers face the risk of building compliance processes that require rework once final standards are issued.</li><li>The underestimated complexity of SBOM &#x2014; arguing that an SBOM is not a document but a living, automated process embedded in CI/CD pipelines, particularly challenging for products built on deep open-source dependency stacks (such as DVLOPER&apos;s MultiCloud platform, which integrates 30+ OSS tools).</li><li>SME resource asymmetry as the harmonisation gap &#x2014; large enterprises can absorb compliance costs, while SMEs building digital products cannot staff dedicated compliance teams. EU-funded projects like CYBERFORT and CYBERGUARD are essential complements to regulation.</li></ul><p>In a solo intervention later in the session, Mihai also addressed how CRA compliance can become a competitive advantage &#x2014; positioning it as a market access strategy rather than a cost centre, drawing on DVLOPER&apos;s experience serving US-based enterprise clients through Broadcom&apos;s partner network.</p><p></p><p><strong>What Q1 2026 Meant for Us</strong></p><p>Across four events in three countries, our teams contributed technically, strategically, and publicly to some of the most important conversations in European cybersecurity today. From SOC architecture and intrusion detection to CRA compliance and SME readiness, Q1 2026 has reinforced our position as an active and relevant voice in the ecosystem. We look forward to continuing this engagement in the months ahead.</p>]]></content:encoded></item><item><title><![CDATA[Your AI Works in the Demo. The System Fails Under Load.]]></title><description><![CDATA[<p>In most enterprise deployments, the model is not the limiting factor. 
The failure appears at the system level, once the AI is exposed to real operational conditions.</p><p>In a controlled demo, inputs are predictable, context is bounded, and retrieval pipelines operate on clean, well-structured data. Latency is stable, and responses</p>]]></description><link>https://blog.dvloper.io/your-ai-works-in-the-demo-the-system-fails-under-load/</link><guid isPermaLink="false">69f1d034bc11700001cba4d9</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Wed, 29 Apr 2026 09:33:29 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2026/04/ChatGPT_Image_Apr_29_2026_12_41_05_PM_optimized_1000.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.dvloper.io/content/images/2026/04/ChatGPT_Image_Apr_29_2026_12_41_05_PM_optimized_1000.png" alt="Your AI Works in the Demo. The System Fails Under Load."><p>In most enterprise deployments, the model is not the limiting factor. The failure appears at the system level, once the AI is exposed to real operational conditions.</p><p>In a controlled demo, inputs are predictable, context is bounded, and retrieval pipelines operate on clean, well-structured data. Latency is stable, and responses are evaluated in isolation. Under these conditions, the system performs as expected.</p><p>Production introduces a different set of constraints.</p><p>Queries are less structured and often underspecified. Input data is distributed across systems with inconsistent schemas and varying levels of reliability. Retrieval pipelines must handle partial failures, timeouts, and conflicting signals. Context windows become a constraint as the system attempts to combine multiple sources into a coherent response.</p><p>These are not edge cases. They are the baseline conditions of real usage.</p><p>The architecture decisions made during development become visible at this stage. How the system prioritizes sources when signals conflict. How it maintains state across multi-step workflows. 
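</p><p><em>An illustrative aside:</em> these questions have concrete shape in code. Below is a minimal sketch, with hypothetical source names, of one such behavior: a retrieval step that returns partial results and reports unavailable dependencies instead of failing the whole response.</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field


@dataclass
class RetrievalResult:
    documents: list = field(default_factory=list)
    degraded_sources: list = field(default_factory=list)  # failed or timed-out sources


def retrieve(query: str, sources: dict, timeout_s: float = 0.5) -> RetrievalResult:
    """Query every source concurrently; a slow or failing dependency degrades
    the response (and is reported as degraded) instead of failing it."""
    result = RetrievalResult()
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in sources.items()}
        for name, fut in futures.items():
            try:
                result.documents.extend(fut.result(timeout=timeout_s))
            except Exception:  # timeout, connection error, malformed payload ...
                result.degraded_sources.append(name)
    return result


# One healthy source, one that hangs well past the timeout.
sources = {
    "crm": lambda q: [f"crm-doc:{q}"],
    "tickets": lambda q: time.sleep(2) or [],
}
r = retrieve("billing outage", sources)
```

<p>The shape matters more than the details: callers always get an answer plus an explicit record of what degraded, so downstream logic can decide whether the partial context is good enough.</p><p>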
How it degrades when a dependency is unavailable. How it handles concurrent requests without compounding latency or reducing accuracy.</p><p>Most implementations are not designed for this level of complexity. They are optimized for single-step interactions, not for sustained, multi-step reasoning under load.</p><p>The result is predictable. Accuracy degrades as query complexity increases. Latency becomes inconsistent. Failure modes are unclear or poorly handled. Trust declines, and usage shifts toward low-risk scenarios.</p><p>The model has not changed. The environment has.</p><p>The difference between a working prototype and a reliable system is the architecture that accounts for these conditions &#x2014; before they surface in production.</p><blockquote><a href="https://dvloper.io/ai-factory?ref=blog.dvloper.io#contact" rel="noreferrer"><strong>Schedule an AI System Diagnostic</strong></a></blockquote><p>Or, if you want to understand how this is built in practice: <strong>See how the AI Factory works &#x2192; <a href="https://dvloper.io/ai-factory?ref=blog.dvloper.io">https://dvloper.io/ai-factory</a></strong></p>]]></content:encoded></item><item><title><![CDATA[The LLM Is the Easy Part: Why Ontology-Driven Agents Are the Only Ones That Survive Production]]></title><description><![CDATA[The model is a commodity. 
The layer that determines whether your enterprise agent survives production is the ontology underneath.]]></description><link>https://blog.dvloper.io/the-llm-is-the-easy-part-why-ontology-driven-agents-are-the-only-ones-that-survive-production/</link><guid isPermaLink="false">69ea0435655c3b00019c6ecc</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Fri, 24 Apr 2026 12:00:53 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2026/04/onthology--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.dvloper.io/content/images/2026/04/onthology--1-.png" alt="The LLM Is the Easy Part: Why Ontology-Driven Agents Are the Only Ones That Survive Production"><p><em>How we at dvloper.io build enterprise agentic AI that actually ships, and why the model isn&apos;t where your agent lives or dies.</em></p><p>Every enterprise AI program starts with the same enthusiasm and ends with the same question: &#x201C;Why does the demo look so sharp and the production pilot look like a toddler with a search engine?&#x201D;</p><p>We see this constantly. A team picks a model, wires it to a handful of tools, builds a nice chat UI, and ships something that crushes the happy-path scenarios the product owner demoed. Then it meets real enterprise data, real business rules, real ambiguity, and it collapses. Outputs drift. Trust erodes. The project quietly gets re-labeled as &#x201C;exploratory&#x201D; and everyone pretends the roadmap always had that caveat.</p><p>The thing nobody wants to say out loud is this: <strong>the LLM is not where your agent lives or dies.</strong> The model is a commodity. GPT, Claude, Gemini, Llama. Swap them, benchmark them, argue about them on Twitter. None of it will save an agentic system that doesn&apos;t understand your enterprise.</p><p>The layer that actually determines whether your agent survives contact with production is the one underneath the model. 
The one that tells it what your enterprise actually <em>means</em>.</p><p>That layer is an ontology. And building it properly is what we do at dvloper.io.</p><h2 id="what-a-production-grade-enterprise-agent-actually-requires"><strong>What a production-grade enterprise agent actually requires</strong></h2><p>When we talk about agentic AI at dvloper.io, we mean something very specific: a system that encodes the data, logic, actions, and security of the enterprise into a coherent semantic model, and then lets humans and agents operate on top of it with full fidelity. We call the capability we build for our clients the <strong>AI Agentic Factory</strong>, and it rests on four things, not one.</p><p>Strip away the buzzwords, and an enterprise-grade agent has to do four jobs simultaneously:</p><ul><li><strong>Encode the data of the enterprise.</strong> Unify the vast and fragmented sources of truth: CRM, ERP, systems of record, ticketing platforms, document stores, operational telemetry, into coherent objects, properties, and links. Not a dashboard. Not an API gateway. A single semantic model the agent can actually reason over.</li><li><strong>Capture the logic of the enterprise.</strong> The rules, constraints, and decision frameworks currently living in someone&apos;s head, in a PDF from 2022, or worse, buried in fifteen different stored procedures nobody has touched since the person who wrote them left. Encoded once. 
Consistent everywhere.</li><li><strong>Model the actions of the enterprise as first-class primitives.</strong> Not just &#x201C;the agent can generate an answer,&#x201D; but &#x201C;the agent can write back to the system of record, with the right approvals, the right audit trail, and the right rollback path.&#x201D; Simple transactions and multi-step workflows, both governed the same way.</li><li><strong>Govern both humans and agents under one security model.</strong> Same identity, same permissions, same audit logs, whether the actor is a senior analyst or an autonomous agent. No CISO is going to sign off on &#x201C;the LLM decided what was allowed.&#x201D;</li></ul><p>None of that is LLM work. All of it is semantic-layer work. And it is exactly what most agentic AI programs skip in the rush to ship a demo, which is exactly why most agentic AI programs stall between demo and production.</p><h2 id="why-agents-fail-without-a-semantic-layer"><strong>Why agents fail without a semantic layer</strong></h2><p>Let&apos;s be specific about what actually breaks when you skip the ontology.</p><p><strong>1. The same term means five different things.</strong> A &#x201C;customer&#x201D; in CRM is not the same as a &#x201C;customer&#x201D; in billing, is not the same as a &#x201C;customer&#x201D; in the churn model, is not the same as the entity your contract says you owe money to. Your agent sees all five and averages them.</p><p><strong>2. Business rules live in humans, not systems.</strong> The rule that &#x201C;we never onboard vendors from jurisdiction X without a Tier-2 compliance review&#x201D; exists in someone&apos;s head and in a PDF from 2022. Your agent has no idea.</p><p><strong>3. The agent can&apos;t tell when it doesn&apos;t know.</strong> This is the failure mode that destroys enterprise trust faster than any other. An agent that confidently answers with incomplete information is worse than no agent at all, because you stop checking it.</p><p><strong>4. 
Nothing is explainable.</strong> When the agent produces a decision, nobody can reconstruct why. No auditor will sign off on that. No compliance team will let it run unsupervised. No regulator will let it touch a regulated workflow.</p><p><strong>5. Every new use case is a from-scratch rebuild.</strong> Without a shared semantic backbone, each agent is its own snowflake. Pilots never become platforms. Year two looks exactly like year one except with more sunk cost.</p><p>These are not model problems. No amount of swapping from GPT-4 to Claude to Gemini to Llama will fix them. They are architectural problems, and the architecture is the ontology.</p><h2 id="how-we-actually-build-these-systems"><strong>How we actually build these systems</strong></h2><figure class="kg-card kg-image-card"><img src="https://blog.dvloper.io/content/images/2026/04/simple-schema.png" class="kg-image" alt="The LLM Is the Easy Part: Why Ontology-Driven Agents Are the Only Ones That Survive Production" loading="lazy" width="1536" height="1024" srcset="https://blog.dvloper.io/content/images/size/w600/2026/04/simple-schema.png 600w, https://blog.dvloper.io/content/images/size/w1000/2026/04/simple-schema.png 1000w, https://blog.dvloper.io/content/images/2026/04/simple-schema.png 1536w" sizes="(min-width: 720px) 720px"></figure><p>Here is the pattern we use at dvloper.io, and what makes the Agentic Factory approach different from &#x201C;hook an LLM to LangChain and hope.&#x201D;</p><p><strong>1. Ontology-first, model-second</strong></p><p>Before we pick the LLM, we model the domain. What are the entities? What are the relationships? What are the rules, constraints, and required properties? What does &#x201C;done&#x201D; look like for each workflow? What does every actor (human or agent) need to know to make a safe decision?</p><p>This is boring, unglamorous, diagram-heavy work. It is also exactly what separates the systems that ship from the ones that don&apos;t.</p><p><strong>2. 
The agent&apos;s job is semantic translation, not answer generation</strong></p><p>Once the ontology exists, the LLM&apos;s job gets narrower than people expect. It is not &#x201C;generate the answer.&#x201D; It is: <em>take the user&apos;s request, map it to ontology concepts, identify what is being asked, and route to the right tools.</em></p><p>That is a much more tractable problem for an LLM. It is also much easier to make reliable, testable, and debuggable. Most of what people call &#x201C;hallucination&#x201D; in production agents is actually the LLM being asked to do a job the architecture should have handled for it.</p><p><strong>3. Tools are first-class citizens, and they are boring on purpose</strong></p><p>We build tools as small, deterministic, testable pieces of code that the agent composes into workflows. Boring tools are good tools. They fail predictably. They log cleanly. They don&apos;t hallucinate.</p><p>The tools that show up in every ontology-driven agent we build:</p><ul><li><strong>The Gap Capture tool.</strong> Identifies what is missing before the agent acts. If an onboarding request does not have a jurisdiction, the agent does not guess. It records the gap.</li><li><strong>The Question Generation tool.</strong> Turns gaps into targeted follow-up questions, grounded in the ontology rather than generic &#x201C;can you clarify?&#x201D; prompts. &#x201C;Is this vendor operating in a restricted jurisdiction?&#x201D; beats &#x201C;Tell me more about the vendor&#x201D; every time.</li><li><strong>The Assumption Logging tool.</strong> When a workflow has to proceed with a temporary assumption, the agent records it explicitly. Every decision becomes auditable. Every assumption becomes a conversation the team can later have.</li><li><strong>The Validation tool.</strong> Checks that requests and resolved entities satisfy ontology constraints <em>before</em> any action is taken. Every contract must be linked to a legal entity. 
Every change request must have a rollback path. Every product must have a capability owner. The ontology says so; the validator enforces it.</li><li><strong>The Reasoning tool.</strong> Applies rules over ontology relationships to derive facts the input didn&apos;t explicitly contain: eligibility, risk tier, dependency chains, blast radius. This is where ontology stops being a dictionary and starts being an inference engine.</li><li><strong>The Escalation tool.</strong> When the agent can&apos;t safely complete a task, it routes to a human <em>with full context</em> (gaps, assumptions, partial work, recommended next actions), not a generic ticket with &#x201C;agent failed, please investigate.&#x201D;</li></ul><p>These are the tools that make the difference between an agent that answers fast and an agent you would actually put in front of a regulator.</p><p><strong>4. Observability on top, governance on the side</strong></p><p>Every agent call, every tool invocation, every assumption, every escalation, all logged, queryable, auditable. This is not bolted on at the end. It is part of the architecture from day one. When an agent makes a decision six months from now, you can reconstruct exactly why.</p><p><strong>5. The platform compounds</strong></p><p>This is where the &#x201C;Factory&#x201D; in Agentic Factory matters. The first agent we build comes with a semantic model, a tool library, and observability plumbing. The second agent reuses all of it. By the third or fourth use case, the incremental cost of a new agent is a fraction of the first one.</p><p>That is when agentic AI stops being a project and starts being a capability.</p><h2 id="a-real-example-agent-driven-network-operations"><strong>A real example: agent-driven network operations</strong></h2><p>Some of our most demanding work lives in network operations: proactive monitoring, configuration validation, incident triage. Real-time, high-stakes, zero-tolerance-for-wrong-answers territory. 
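</p><p>The Validation and Gap Capture behavior described in the tool list above can be sketched in miniature. The constraint and field names below are hypothetical, chosen to echo a network change request; the point is that constraints come from the ontology, and an unsatisfied constraint produces a question for a human rather than a guess.</p>

```python
from dataclasses import dataclass, field


@dataclass
class ValidationReport:
    gaps: list = field(default_factory=list)  # human questions for unsatisfied constraints

    @property
    def blocked(self) -> bool:
        return bool(self.gaps)


# Hypothetical ontology-derived constraints for a change request:
# required field -> the question to route to a human when it is missing.
CONSTRAINTS = {
    "maintenance_window": "Is this change scheduled inside the tenant's approved maintenance window?",
    "rollback_path": "Has a rollback path been verified for the affected interfaces?",
    "peer_redundancy_ok": "Has the peer-side team confirmed redundant-link status?",
}


def validate_change(change: dict) -> ValidationReport:
    """Record a gap for every unsatisfied constraint; never proceed on a guess."""
    report = ValidationReport()
    for required_field, question in CONSTRAINTS.items():
        if not change.get(required_field):
            report.gaps.append(question)
    return report


# A request that names a window but has no verified rollback or redundancy status.
report = validate_change({"maintenance_window": "2026-03-10T02:00Z"})
```

<p>A request missing two of the three fields comes back blocked, with two targeted questions attached instead of a confident answer.</p><p>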
An agent that confidently pushes a bad change into a production network does not produce a bug. It produces an outage with a name.</p><p>What does ontology look like here?</p><ul><li><strong>Entities:</strong> devices, interfaces, links, policies, tenants, service paths, SLAs, maintenance windows.</li><li><strong>Relationships:</strong> which device serves which tenant, which policy applies to which interface, which link is primary versus backup, which tenant shares which blast radius.</li><li><strong>Rules:</strong> what constitutes a valid configuration change, what requires tenant approval, what can be auto-remediated, what must never be touched outside a maintenance window.</li></ul><p>A traditional agent asked <em>&#x201C;Can we push this configuration change?&#x201D;</em> might confidently answer yes based on syntactic validation alone. That is dangerous.</p><p>An ontology-driven agent answers a different question entirely: <em>given the ontology of this network, is this change safe, for which tenants, with what downstream effects, and what assumptions is it making?</em></p><p>It might conclude:</p><p><strong>Change validation: Blocked.</strong></p><p><strong>Gaps detected:</strong></p><p>&#x2022; Affected tenant SLA window not confirmed</p><p>&#x2022; Rollback path for interface bundle not verified</p><p>&#x2022; Peer link redundancy currently degraded</p><p><strong>Questions requiring human input:</strong></p><p>&#x2022; Is this change scheduled inside the tenant&apos;s approved maintenance window?</p><p>&#x2022; Has the peer-side team acknowledged the redundant-link status?</p><p><strong>Assumptions logged:</strong></p><p>&#x2022; Tenant SLA tier inferred from most recent service catalog snapshot</p><p>&#x2022; Rollback path assumed to be the previously-deployed configuration baseline</p><p><strong>Recommended next actions:</strong></p><p>&#x2022; Route to on-call network engineer with full context</p><p>&#x2022; Hold change until peer-side redundancy 
restored</p><p>That is not a chatbot. That is a co-worker with enough context to be trustworthy, and enough self-awareness to escalate when it shouldn&apos;t proceed alone.</p><h2 id="the-business-case-said-plainly"><strong>The business case, said plainly</strong></h2><p>The reason this architecture is worth the upfront work is not theoretical. It shows up in five places:</p><p><strong>Trust.</strong> The agent shows what it knows, what it doesn&apos;t know, and why. Users stop second-guessing outputs, which means they actually start using them.</p><p><strong>Explainability.</strong> Every decision is grounded in ontology, rules, and traceable gaps. Regulators, auditors, and internal risk teams can follow the reasoning end-to-end.</p><p><strong>Reusability.</strong> The ontology becomes a shared asset across every agent and workflow you build. Second use case costs a fraction of the first.</p><p><strong>Governance.</strong> Assumptions, questions, unresolved issues, and escalations are explicitly captured, not hidden inside model weights you cannot inspect.</p><p><strong>Scalability.</strong> Different domain agents share the same semantic backbone. Your platform grows as a coherent system instead of a collection of disconnected pilots.</p><p>These are not theoretical benefits. They are the reason enterprises that adopt this pattern end up with agentic AI running in production, and the ones that don&apos;t end up with a graveyard of demos.</p><h2 id="the-closing-thought"><strong>The closing thought</strong></h2><p>LLMs are excellent at generating language. Enterprise decisions require structure, meaning, constraints, and the discipline to recognize when an answer is incomplete. One of those things is a commodity in 2026. The other is the work.</p><p>At dvloper.io, the work is what we sell. The ontology-driven agent is not a demo pattern for us. 
It is how we ship.</p><p>If you are stuck somewhere between a promising POC and a production agent you&apos;d actually trust with a regulated workflow, the missing layer is probably not a better model. It is the ontology underneath.</p><p>We would be happy to talk about yours.</p><p><em>Want to discuss where your agentic AI program is stuck, or how an ontology-first architecture would fit your stack? Get in touch at dvloper.io.</em></p><p><em>Razvan Georgescu, VP of Data &amp; AI, dvloper.io</em></p>]]></content:encoded></item><item><title><![CDATA[JiraAI: Turning a Decade of Jira Tickets Into an Answer Engine Your Team Can Actually Trust]]></title><description><![CDATA[<h2 id="the-problem-every-growing-engineering-organization-eventually-runs-into"><strong>The problem every growing engineering organization eventually runs into</strong></h2><p>At some point, every team stops being able to remember everything it has already learned. The knowledge is there, sitting in ten thousand Jira tickets, buried in comment threads, resolution summaries, and carefully written post-mortems of incidents from years ago. 
All of</p>]]></description><link>https://blog.dvloper.io/jiraai-turning-a-decade-of-jira-tickets-into-an-answer-engine-your-team-can-actually-trust/</link><guid isPermaLink="false">69df5b6b6096ae0001daf35a</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Wed, 15 Apr 2026 09:37:05 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1736953072477-bd26e3073d02?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fFdvb2RlbiUyMGNhcmQlMjBjYXRhbG9nJTIwZHJhd2VycyUyMHdpdGglMjBsYWJlbHN8ZW58MHx8fHwxNzc2MjQ1OTc4fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="the-problem-every-growing-engineering-organization-eventually-runs-into"><strong>The problem every growing engineering organization eventually runs into</strong></h2><img src="https://images.unsplash.com/photo-1736953072477-bd26e3073d02?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fFdvb2RlbiUyMGNhcmQlMjBjYXRhbG9nJTIwZHJhd2VycyUyMHdpdGglMjBsYWJlbHN8ZW58MHx8fHwxNzc2MjQ1OTc4fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="JiraAI: Turning a Decade of Jira Tickets Into an Answer Engine Your Team Can Actually Trust"><p>At some point, every team stops being able to remember everything it has already learned. The knowledge is there, sitting in ten thousand Jira tickets, buried in comment threads, resolution summaries, and carefully written post-mortems of incidents from years ago. All of it is searchable, technically. None of it is findable in the way a human actually asks questions.</p><p>New team members spend their first weeks asking questions that have been answered a dozen times before. Senior engineers get pulled off their own work to explain history they&#x2019;ve already explained, which isn&#x2019;t just a productivity tax; it&#x2019;s one of the quieter reasons people burn out. 
Incidents drag on longer than they should because nobody can remember whether this exact symptom has happened before, or whether a similar issue in a different service was already root-caused and resolved.</p><p>The organization owns the answers. It just can&#x2019;t access them at the speed of conversation. That gap between &#x201C;we know this&#x201D; and &#x201C;we can retrieve this&#x201D; is worth closing. That&#x2019;s the problem JiraAI was built to solve at dvloper.io.</p><h2 id="why-we-built-our-own-product-layer"><strong>Why we built our own product layer</strong></h2><p>Before we wrote a line of code, we spent serious time evaluating <a href="https://github.com/infiniflow/ragflow?ref=blog.dvloper.io">RagFlow</a>, the open-source RAG engine that has become one of the most capable platforms in its category. It&#x2019;s genuinely impressive: deep document parsing, intelligent chunking, multi-modal retrieval, grounded citations, agent workflows, and <a href="https://ragflow.io/blog/ragflow-0.22.0-data-source-synchronization-enhanced-parser-agent-optimization-and-admin-ui?ref=blog.dvloper.io">native Jira support since v0.22.0</a>. For many teams, it&#x2019;s an excellent starting point. Three concerns ruled out a direct deployment for us.</p><h3 id="access-control-you-can-prove"><strong>Access control you can prove</strong></h3><p>Engineering managers and security leads don&#x2019;t just want &#x201C;a chatbot over Jira.&#x201D; They want to prove that a contractor on Project A has no path to data from Project B, whether through the UI, through the API, or by guessing identifiers. RagFlow&#x2019;s open-source edition is designed around workspaces, not per-user per-knowledge-base grants. The <a href="https://github.com/infiniflow/ragflow/issues/2588?ref=blog.dvloper.io">maintainers confirmed this</a> when they closed the RBAC request in May 2025: granular permission control is a commercial-tier feature. 
For our use case, this wasn&#x2019;t a limitation we could work around; it was the core of what we needed to deliver.</p><h3 id="roadmap-autonomy"><strong>Roadmap autonomy</strong></h3><p>Building entirely on a third-party platform means your product&#x2019;s future is tied to their release schedule and licensing decisions. Owning the application layer while leaning on RagFlow as the retrieval engine underneath meant we could ship the features our teams actually asked for, on our timeline, without waiting for an external roadmap to catch up.</p><h3 id="technology-vs-product"><strong>Technology vs. product</strong></h3><p>A RAG engine does one thing extraordinarily well: retrieve. A knowledge product is the onboarding flow that makes a junior engineer feel supported, the audit trail that satisfies a security review, the dashboard that justifies the budget. None of that ships inside a retrieval engine by default. JiraAI is the product. RagFlow is the engine. Keeping those two things separate, and letting each do what it&#x2019;s best at, is the most important architectural decision we&#x2019;ve made on this project.</p><h2 id="what-jiraai-delivers"><strong>What JiraAI delivers</strong></h2><h3 id="1-real-rbac-tied-to-your-existing-identity"><strong>1. Real RBAC tied to your existing identity</strong></h3><p>JiraAI integrates directly with Keycloak (or any OIDC-compliant IdP). Access is enforced at three independent levels: site-wide roles (<strong>site.admin</strong>, <strong>project.admin</strong>, <strong>project.user</strong>), per-user knowledge base grants for contractors and cross-functional contributors, and project-to-knowledge-base mappings that make tenancy clean and auditable. One Keycloak change grants access; one change removes it entirely. MFA, session policies, and password rotation are all inherited; JiraAI can&#x2019;t accidentally weaken them. 
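</p><p>As a rough illustration of those three levels (the names and data structures below are hypothetical, not JiraAI&#x2019;s actual code), the whole question reduces to &#x201C;which knowledge bases may this user query?&#x201D;:</p>

```python
# Illustrative sketch only: three-level access resolution.
# Site-wide role -> project-to-KB mappings -> explicit per-user grants.
from dataclasses import dataclass, field

@dataclass
class User:
    roles: set                                   # e.g. {"project.user"}, from the OIDC token
    projects: set                                # project memberships resolved at login
    kb_grants: set = field(default_factory=set)  # explicit per-user grants

# project -> knowledge bases that belong to it (the tenancy mapping)
PROJECT_KBS = {"project-a": {"kb-a1", "kb-a2"}, "project-b": {"kb-b1"}}

def allowed_kbs(user: User) -> set:
    """Knowledge bases this user may query: site.admin sees everything;
    everyone else gets their projects' KBs plus any explicit grants."""
    if "site.admin" in user.roles:
        return set().union(*PROJECT_KBS.values())
    via_projects = set().union(*(PROJECT_KBS.get(p, set()) for p in user.projects))
    return via_projects | user.kb_grants

contractor = User(roles={"project.user"}, projects={"project-a"})
# A contractor on Project A has no route to Project B's data,
# even by guessing a knowledge-base identifier.
assert "kb-b1" not in allowed_kbs(contractor)
```

<p>Computing this set once and enforcing it at retrieval time, rather than only in the UI, is what makes the guarantee provable rather than cosmetic.</p><p>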
For a security review, this is the difference between a two-week conversation and a two-day one.</p><h3 id="2-an-admin-dashboard-for-decision-makers"><strong>2. An admin dashboard for decision-makers</strong></h3><p>Every early demo ended with leadership asking the same three questions: Is anyone actually using this? Are the answers good? What are we getting for what we&#x2019;re spending? The /admin view answers all three: adoption by team and project, trended over time; per-response helpfulness signals captured directly in chat; and LLM usage tied to specific tickets and projects. It&#x2019;s a product owner&#x2019;s view of whether the thing is working, not a RAG operator&#x2019;s view of embedding health and vector metrics.</p><h3 id="3-jira-data-understood-before-it%E2%80%99s-indexed"><strong>3. Jira data understood before it&#x2019;s indexed</strong></h3><p>Raw Jira tickets are noisy: triage summaries written before the problem was understood, low-signal comments (&#x201C;bump,&#x201D; &#x201C;retry,&#x201D; &#x201C;any update?&#x201D;), attachments in mixed formats, and resolutions buried in the last reply of a long thread. Feed that directly into a knowledge base and you get answers that reflect the noise.</p><p>JiraAI&#x2019;s ingestion pipeline runs every ticket through an AI enrichment step before it reaches the knowledge base, restructuring it into what the problem was, what was tried, what was ruled out, and what finally worked. Atlassian Document Format comments are parsed correctly. A single ticket can fan out to multiple knowledge bases without duplicating the enrichment work. The result is cleaner retrieval and more accurate, more specific answers.</p><h3 id="4-project-centric-ux-and-a-resilient-pipeline"><strong>4. Project-centric UX and a resilient pipeline</strong></h3><p>Instead of navigating a list of opaque knowledge base identifiers, users start from a project, an intuitive and familiar organizing unit they already work in every day. 
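</p><p>A minimal sketch of what that project-first framing means in practice (illustrative names only, not JiraAI&#x2019;s actual code): the user picks a project, and the opaque knowledge-base identifiers stay an internal detail.</p>

```python
# Illustrative sketch: resolve the project a user chose into the
# retrieval request the engine actually sees (names are hypothetical).
PROJECT_TO_KBS = {
    "payments": ["kb_7f3a", "kb_91cc"],  # opaque, engine-side identifiers
    "platform": ["kb_20de"],
}

def scoped_query(project: str, question: str) -> dict:
    """Build a retrieval request scoped to one project's knowledge bases.
    An unknown project resolves to nothing, never to a wider search."""
    kb_ids = PROJECT_TO_KBS.get(project, [])
    return {"question": question, "kb_ids": kb_ids}

req = scoped_query("payments", "Have we seen this lag spike before?")
assert req["kb_ids"] == ["kb_7f3a", "kb_91cc"]
```

<p>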
The assistant, access controls, and analytics all follow automatically. Behind the scenes, a decoupled producer/consumer pipeline handles incremental Jira polling, enrichment, and fan-out independently. If the downstream is temporarily unavailable, the producer keeps collecting and the consumer catches up. Single-ticket failures don&#x2019;t affect the batch. Idempotency is enforced at the database level. Reliability isn&#x2019;t a feature you market; it&#x2019;s the absence of the outages you didn&#x2019;t have.</p><h2 id="how-a-question-becomes-an-answer"><strong>How a question becomes an answer</strong></h2><p>An engineer hits a familiar-looking Kafka consumer timeout and types: &#x201C;Have we seen this lag spike before, and what was the fix?&#x201D; JiraAI authenticates them via Keycloak, resolves their project memberships, and queries only the knowledge bases they&#x2019;re allowed to see; the access boundary is enforced at the retrieval step, not just at the UI. Enriched ticket chunks are retrieved, re-ranked, and passed to the LLM with a carefully designed system prompt. The answer streams back with citations to specific tickets.</p><p>The engineer clicks through to an eleven-month-old ticket and finds the root cause in three minutes. They mark the response helpful; that signal feeds the admin dashboard, where a product owner can see which teams are getting value and which knowledge bases have gaps. Nobody thought about knowledge base IDs, RAG configuration, or which model is powering the response. It just worked, and it was safe, and it was measurable.</p><h2 id="the-lesson"><strong>The lesson</strong></h2><p>Retrieval engines don&#x2019;t ship with the parts that make them products. Access control, audit trails, domain-specific data handling, a UX that fits how engineers think about their work: none of that comes in the box, regardless of how good the underlying engine is. A thin interface on top of RagFlow would have passed the demo. 
It would not have passed the security review.</p><p>So we built those parts ourselves and layered them on top of RagFlow&#x2019;s retrieval quality, which we trust and don&#x2019;t have to maintain. The hard foundational work is someone else&#x2019;s problem. The product experience, the part our teams open every morning, is entirely ours to shape and iterate on.</p><h3 id="further-reading"><strong>Further reading</strong></h3><p>&#x2013;&#xA0; <a href="https://github.com/infiniflow/ragflow?ref=blog.dvloper.io">RAGFlow on GitHub</a></p><p>&#x2013;&#xA0; <a href="https://ragflow.io/blog/ragflow-0.22.0-data-source-synchronization-enhanced-parser-agent-optimization-and-admin-ui?ref=blog.dvloper.io">RAGFlow v0.22.0: data sources, admin UI, parser improvements</a></p>]]></content:encoded></item><item><title><![CDATA[From Assistant to Agent: Phase 3 of an Enterprise AI System at Fortune-10 Scale]]></title><description><![CDATA[<p>Dvloper is entering Phase 3 of a strategic AI collaboration with a Fortune 10 infrastructure leader, evolving an enterprise assistant into a more advanced agentic system designed to reason across complex internal knowledge and support real operational workflows in production environments.</p><p>Enterprise AI rarely fails during training. 
It fails in</p>]]></description><link>https://blog.dvloper.io/from-assistant-to-agent-phase-of-an-enterprise-ai-system/</link><guid isPermaLink="false">69af1e58abe6d00001775dac</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Tue, 17 Mar 2026 14:14:54 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2026/03/header--1-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.dvloper.io/content/images/2026/03/header--1-.jpg" alt="From Assistant to Agent: Phase 3 of an Enterprise AI System at Fortune-10 Scale"><p>Dvloper is entering Phase 3 of a strategic AI collaboration with a Fortune 10 infrastructure leader, evolving an enterprise assistant into a more advanced agentic system designed to reason across complex internal knowledge and support real operational workflows in production environments.</p><p>Enterprise AI rarely fails during training. It fails in production.</p><p>That reality has shaped our collaboration with a Fortune 10 infrastructure leader, where the goal was never to build another impressive assistant, but a system capable of navigating complex internal knowledge and supporting real operational workflows.</p><p>After successfully delivering Phase 1 and Phase 2, we are now entering Phase 3 - the biggest stage of the partnership so far.</p><h3 id="from-early-validation-to-deeper-operational-value"><strong>From Early Validation To Deeper Operational Value</strong></h3><p>The earlier phases of this collaboration were centered on building the right foundation: understanding the problem space, shaping the assistant architecture, integrating knowledge sources, and validating how AI could be applied in a way that was actually useful inside a real enterprise environment.</p><p>That part matters more than people think.</p><p>In enterprise settings, AI is not valuable just because it can answer questions. 
It becomes valuable when it can work within complex systems, reason through fragmented information, and produce outputs that are relevant, reliable, and aligned with how teams actually operate.</p><p>That is exactly where this collaboration has been heading.</p><p>With the first two phases successfully completed, and with strong stakeholder feedback for going beyond the initial scope, the project is now moving past foundational capability and into a more advanced stage focused on broader reasoning, stronger orchestration, and deeper operational fit.</p><h3 id="what-phase-3-is-about"><strong>What Phase 3 Is About</strong></h3><figure class="kg-card kg-image-card"><img src="https://blog.dvloper.io/content/images/2026/03/IMG_4642-1.JPEG" class="kg-image" alt="From Assistant to Agent: Phase 3 of an Enterprise AI System at Fortune-10 Scale" loading="lazy" width="1600" height="872" srcset="https://blog.dvloper.io/content/images/size/w600/2026/03/IMG_4642-1.JPEG 600w, https://blog.dvloper.io/content/images/size/w1000/2026/03/IMG_4642-1.JPEG 1000w, https://blog.dvloper.io/content/images/2026/03/IMG_4642-1.JPEG 1600w" sizes="(min-width: 720px) 720px"></figure><p>This next stage focuses on expanding the agentic capabilities of the platform so that it can do more than respond: it becomes a proactive agentic framework instead of a reactive one. 
It needs to reason across enterprise context, coordinate specialized workflows, and support more structured decision paths in environments where accuracy and traceability matter.</p><p>At a high level, this phase builds on the progress already made in areas such as:</p><ul><li>Agent orchestration for more structured task handling</li><li>Context-aware reasoning across multiple internal knowledge sources</li><li>Improved routing between specialized flows and capabilities</li><li>Stronger support for complex operational investigations</li><li>A more production-minded approach to integration, validation, and rollout</li></ul><p>The goal is not to build AI for the sake of AI.</p><p>The goal is to build a system that can genuinely support teams operating in complex technical environments, where the volume of information is high, the paths to resolution are rarely linear, and trust in the output matters just as much as speed.</p><h3 id="why-this-work-matters"><strong>Why This Work Matters</strong></h3><p>A lot of AI content today focuses on generic assistants and surface-level automation. But enterprise reality is different.</p><p>Real internal systems are layered. Knowledge is distributed. Processes evolve over time. 
And the people using these tools are not looking for novelty; they are looking for practical value in their day-to-day workflows.</p><p>That is why this collaboration has been shaped around a more grounded engineering approach.</p><p>Instead of treating the assistant like a standalone chatbot, the system has been designed as a more structured, agent-driven capability: one that can connect to enterprise knowledge, follow defined reasoning paths, and support users in a way that feels closer to an operational companion than a generic interface.</p><p>That distinction is important, especially in larger organizations where adoption depends on whether the solution can fit into real workflows, not just perform well in a controlled demo.</p><p>In most large organizations, operational teams deal with high volumes of structured and unstructured information coming from multiple systems that were never designed to talk to each other. The people doing the work are experienced - they know how to navigate the complexity - but they spend a disproportionate amount of time on the repetitive cognitive work: gathering context, cross-referencing sources, triaging what matters from what doesn&apos;t. That is not an automation problem. It is a <strong>reasoning problem</strong>. And that is where agentic AI actually starts to make sense - not as a replacement for expertise, but as a layer that handles the heavy lifting before a decision needs to be made.</p><h3 id="what-makes-this-phase-different"><strong>What Makes This Phase Different</strong></h3><p>What makes Phase 3 significant is not only the scale of the work, but the level of maturity behind it.</p><p>By this point, the collaboration is no longer about testing whether the idea has potential. That has already been demonstrated through the earlier phases. 
Phase 3 is about extending that success into a larger, more capable system with stronger real-world value.</p><p>For our team, this is also the kind of work we care deeply about: combining modern AI frameworks with disciplined engineering, thoughtful architecture, and a strong understanding of how enterprise systems actually behave.</p><p>It is one thing to prototype an assistant.</p><p>It is another to design one that can evolve responsibly inside a large operational environment.</p><p>That is the challenge and the opportunity in front of us now.</p><h3 id="looking-ahead"><strong>Looking Ahead</strong></h3><p>We are excited to begin Phase 3 and continue building on the trust, momentum, and technical foundation established so far.</p><p>This next chapter represents more than just another milestone. It reflects the strength of a partnership built through delivery, iteration, and a shared commitment to building AI systems that are useful, reliable, and grounded in real operational needs.</p><p>For Dvloper, it is also a strong example of how we approach enterprise AI, not as a trend, but as an engineering discipline.</p><p>The companies that will lead with AI are not the ones moving fastest. They are the ones building systems that their teams actually trust.</p>]]></content:encoded></item><item><title><![CDATA[A Month-by-Month Skill Growth Retrospective]]></title><description><![CDATA[<p>My time at the dvloper.io academy was a transformative experience designed to bridge the gap between basic programming knowledge and enterprise-level software development. 
The primary objective of the program was to familiarize students with real-world applications, full-stack development, and professional workflows, all within a structured Agile methodology emphasizing continuous</p>]]></description><link>https://blog.dvloper.io/a-month-by-month-skill-growth-retrospective/</link><guid isPermaLink="false">69aeecc74e892d00014bc065</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Mon, 09 Mar 2026 15:53:24 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1517245386807-bb43f82c33c4?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fHNraWxsfGVufDB8fHx8MTc3MzA3MTUxOXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1517245386807-bb43f82c33c4?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fHNraWxsfGVufDB8fHx8MTc3MzA3MTUxOXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="A Month-by-Month Skill Growth Retrospective"><p>My time at the dvloper.io academy was a transformative experience designed to bridge the gap between basic programming knowledge and enterprise-level software development. The primary objective of the program was to familiarize students with real-world applications, full-stack development, and professional workflows, all within a structured Agile methodology emphasizing continuous delivery and iterative improvement.</p><p>The curriculum followed two-week sprints, with tasks building on each other and increasing in complexity. Agile practices guided the process: sprint planning to set goals, bi-weekly meetings to stay aligned, code reviews for quality, and retrospectives to reflect and improve.&#xA0;Mentorship was a key part of the academy experience.&#xA0;</p><p>The first month at dvloper.io focused on laying a stable foundation for enterprise-level development. 
I began by setting up Virtual Machines, installing development tools, and configuring all dependencies required to run complex applications. Understanding the infrastructure behind these systems was equally important: I learned how enterprise applications rely on interconnected services and environments to function reliably. Alongside this, I worked on designing application workflows, planning scalable structures, and mapping how different components would interact. This phase was critical for building core technical skills, developing a structured approach to problem-solving, and learning how to manage tasks efficiently through the two-week sprint structure.</p><p>With the technical foundation in place, I moved on to backend development and enterprise data workflows. I focused on writing robust backend logic capable of handling complex business processes and designing REST APIs to facilitate smooth communication across application layers. Integrating these backend components into a cohesive system taught me to think in terms of system architecture and data flow management, reinforcing how enterprise applications are designed for scalability, reliability, and maintainability. By the end of this month, I felt confident tackling enterprise-level backend challenges and understood how backend services support the overall functionality of large-scale applications.</p><p>The third month emphasized full-stack development and integration. I shifted my focus to building interactive, responsive frontends and connecting them to the backend APIs I had developed. Testing and refining the end-to-end workflows gave me a clear view of how all components of an application come together to create a seamless user experience. This stage reinforced my understanding of full-stack development and highlighted the importance of integration and user-focused design in enterprise applications. 
By seeing how each part of the system interacts, I gained hands-on experience in managing complexity in a way that mirrors real-world professional environments.</p><p>The final month introduced enterprise DevOps practices, ensuring the applications we built could be deployed and maintained reliably. I learned how to package applications with containerization for consistent environments, deploy and manage them at scale using Kubernetes, and automate testing and deployment through CI/CD pipelines. This phase enhanced my skills in automation, deployment, and scalable system management, all essential for enterprise software development. It also demonstrated how modern development workflows rely on collaboration between developers, operations, and continuous delivery systems to maintain reliability in production environments.</p><p><strong>Takeaways</strong></p><p>The program provided an invaluable start to my career, equipping me with both the technical expertise and professional skills required in real-world software development. I am now confident to approach new challenges and to continuously improve in a professional environment.</p>
As our portfolio grew to include complex platforms like JIRA AI, NationalHR, and Backstage.io, we faced a common challenge: how do we maintain consistent code quality standards across</p>]]></description><link>https://blog.dvloper.io/elevating-code-quality-with-sonarqube-at-dvloper/</link><guid isPermaLink="false">6992fab510a5e00001af6205</guid><dc:creator><![CDATA[Gabriel Cosmin Bilciurescu]]></dc:creator><pubDate>Mon, 16 Feb 2026 11:09:21 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2026/02/lo-1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.dvloper.io/content/images/2026/02/lo-1.jpeg" alt="Elevating Code Quality with SonarQube at dvloper.io"><p>At dvloper.io, we believe that code quality is not just a nice-to-have. It is the foundation of sustainable software development. As our portfolio grew to include complex platforms like JIRA AI, NationalHR, and Backstage.io, we faced a common challenge: how do we maintain consistent code quality standards across diverse teams and technologies?</p><p>SonarQube became our answer, a powerful static code analysis platform that has transformed how we approach quality assurance. In this article, I will share our journey of implementing SonarQube across multiple projects and the lessons we have learned along the way.</p><p><strong>The Challenge: Scaling Quality in Microservices</strong></p><p>Our flagship project, JIRA AI, is a multi-module Spring Boot microservices application comprising several interconnected services: a core REST API backend, Kafka message producers and consumers, shared utility libraries, and a React frontend. Each module has its own complexity, dependencies, and potential for technical debt.</p><p>Similarly, our work on NationalHR and our contributions to Backstage.io required a unified approach to quality that could scale with our ambitions. 
We needed a solution that would provide consistent standards without stifling the unique requirements of each project.</p><p><strong>Our Configuration Strategy</strong></p><p>The key to our successful SonarQube implementation lies in a multi-layered configuration hierarchy. At the root of each project, we define core properties: unified project identification, quality gate integration that ensures pipeline failures on violations, and secure environment-based token management. This centralized approach guarantees that all modules inherit the same baseline standards.</p><p>Each service module then maintains its own configuration for granular control, including targeted source and test directory mapping, Java version alignment per module requirements, intelligent exclusions for generated code and configuration files, and seamless JaCoCo test coverage integration. We maintain a minimum 80% test coverage threshold across all projects, which has significantly reduced our production bug rate.</p><p><strong>CI/CD Pipeline Integration</strong></p><p>For JIRA AI, NationalHR, and our other projects, we have implemented sophisticated three-stage GitLab CI pipelines covering build, SonarQube analysis, and deployment. Our configuration features intelligent caching for Maven dependencies and SonarQube results, full Git history access for accurate blame information, and branch-specific analysis with different rules for protected versus feature branches.</p><p>A crucial design decision was setting our SonarQube analysis to non-blocking mode. Quality issues are surfaced and tracked without preventing deployments, empowering teams to make informed decisions while maintaining delivery velocity.</p><p><strong>The Benefits We Have Realized</strong></p><p><strong>Automated Quality Gates:&#xA0;</strong>Our quality gates prevent technical debt accumulation by catching issues early. 
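</p><p>As a rough sketch of the pipeline shape described above (illustrative, not our actual configuration; job names and property values are placeholders), the non-blocking analysis stage in GitLab CI might look like this:</p>

```yaml
# Illustrative three-stage pipeline sketch (placeholder values).
stages: [build, analyze, deploy]

variables:
  GIT_DEPTH: "0"             # full Git history, so blame data stays accurate

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - .m2/repository/        # Maven dependencies
    - .sonar/cache/          # SonarQube analysis results

sonarqube-analysis:
  stage: analyze
  script:
    - mvn verify sonar:sonar -Dsonar.token=$SONAR_TOKEN
  allow_failure: true        # non-blocking: surface issues without stopping deploys
```

<p>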
Across all projects, we have seen a measurable reduction in production incidents and maintain code duplication below 3%.</p><p><strong>Enhanced Security:&#xA0;</strong>SonarQube&apos;s vulnerability scanning helps us identify and remediate security issues before production. The OWASP compliance features provide actionable guidance for our security-conscious development practices.</p><p><strong>Developer Productivity:&#xA0;</strong>With IDE integration and real-time feedback during development, our teams catch issues before they even commit code. Pull request analysis provides automated quality feedback, significantly reducing code review burden.</p><p><strong>Lessons Learned</strong></p><p><strong>Strategic Exclusions Matter:&#xA0;</strong>Do not analyze everything. Exclude generated code, configuration files, and model/DTO classes from duplication detection. This focuses your quality metrics on code that actually matters.</p><p><strong>Start with Reasonable Thresholds:&#xA0;</strong>We began with achievable quality gate thresholds and gradually tightened them as our codebase improved. This prevented team frustration while still driving continuous improvement.</p><p><strong>Leverage Caching:&#xA0;</strong>Our caching strategy for both Maven dependencies and SonarQube analysis results significantly reduced pipeline execution times, which is essential for maintaining fast feedback loops.</p><p><strong>Conclusion</strong></p><p>Implementing SonarQube across JIRA AI, NationalHR, Backstage.io, and our other projects at dvloper.io has been transformative. By combining automated analysis, comprehensive coverage reporting, and seamless CI/CD integration, we have established a culture of quality that scales with our growth.</p><p>The multi-module configuration approach provides both unified oversight and granular control, which is essential for enterprise-grade applications. If you are looking to elevate your code quality practices, I encourage you to explore SonarQube. 
The investment in setup pays dividends in reduced bugs, improved security, and happier developers.</p><p>Have questions about our SonarQube implementation? Reach out to us at dvloper.io. We are always happy to share our experiences with the developer community.</p><p><strong>About the Author:&#xA0;</strong>Bilciurescu Gabriel is a software engineer at dvloper.io, where he focuses on building scalable microservices architectures and implementing DevOps best practices.</p>]]></content:encoded></item><item><title><![CDATA[Retrospective on the ESTEEC Olympics Hackathon]]></title><description><![CDATA[<p>The <strong>ESTEEC Olympics Hackathon</strong> has officially concluded, and the results were nothing short of impressive.</p><p>While the energy of a hackathon is often about the &quot;game&quot;, this event was rooted in a much larger mission. It served as a promotional platform for the <strong>Cyberguard project</strong>&#x2014;a major</p>]]></description><link>https://blog.dvloper.io/retrospective-on-the-esteec-olympics-hackathon/</link><guid isPermaLink="false">69281d0fa25b1b00013f06ed</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Thu, 27 Nov 2025 09:44:13 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2025/11/data-src-image-a86588ff-48b6-47fb-a4f7-e0faabfe12d5-1.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.dvloper.io/content/images/2025/11/data-src-image-a86588ff-48b6-47fb-a4f7-e0faabfe12d5-1.jpeg" alt="Retrospective on the ESTEEC Olympics Hackathon"><p>The <strong>ESTEEC Olympics Hackathon</strong> has officially concluded, and the results were nothing short of impressive.</p><p>While the energy of a hackathon is often about the &quot;game&quot;, this event was rooted in a much larger mission. 
It served as a promotional platform for the <strong>Cyberguard project</strong>&#x2014;a major European initiative where our team holds key development responsibilities.</p><h2 id="the-cyberguard-connection">The Cyberguard Connection</h2><p>Cyberguard (funded by the European Commission&#x2019;s Digital Europe Programme) is dedicated to <strong>&quot;Fortifying SOCs Against Evolving Cyber Threats.&quot;</strong> Our day-to-day work in this project involves developing advanced AI-driven technologies to protect critical infrastructure&#x2014;spanning energy, finance, and healthcare&#x2014;from sophisticated attacks.</p><p>We wanted this hackathon to mirror that level of technical rigor. The goal wasn&apos;t just to &quot;spread awareness&quot;, but to showcase the intense engineering reality of modern cybersecurity. We challenged participants to step into the shoes of the developers building the next generation of Security Operation Centers (SOCs).</p><h2 id="the-challenge-aiml-siem-for-pos-fraud">The Challenge: AI/ML SIEM for POS Fraud</h2><p>We tasked the teams with a highly specific, real-world problem: <strong>Building a custom SIEM (Security Information and Event Management) system for Point of Sale (POS) fraud detection.</strong></p><p>Participants had to ingest a continuous, high-velocity data stream, analyze it for anomalies, and visualize threats in real-time.</p><h2 id="engineering-under-pressure">Engineering Under Pressure</h2><p>To simulate the critical nature of the systems we build at Cyberguard, we introduced strict constraints:</p><ul><li><strong>30-second live checker:</strong> Security is time-sensitive. Once an event hit the stream, teams had exactly 30 seconds to detect the fraud and report it. This forced them to prioritize low-latency architecture over sluggish, heavy processing.</li><li><strong>Real Logic vs. Wrappers:</strong> We explicitly banned the &quot;lazy&quot; use of LLMs (simply sending data to a prompt). 
We demanded genuine algorithmic creativity, hybrid models, and custom heuristics that demonstrated true engineering expertise.</li></ul><h2 id="from-data-to-decisions">From data to decisions</h2><p>On top of creating the AI models, the hackathon teams delivered robust dashboards that answered critical business questions instantly:</p><ul><li>What are the top 5 active fraud patterns?</li><li>Which age demographics are being targeted right now?</li><li>How does the current alert volume compare to previous hours?</li></ul><h2 id="summary">Summary</h2><p>This event was a successful extension of our work with Cyberguard. By bringing the complexity of critical infrastructure defense to a hackathon format, we didn&apos;t just promote the project&#x2014;we highlighted the vital importance of integrating AI and machine learning into the fabric of our digital security.</p><p>Congratulations to the winners, and thank you for helping us demonstrate what it takes to truly guard the grid.</p><figure class="kg-card kg-image-card"><img src="https://blog.dvloper.io/content/images/2025/11/data-src-image-a86588ff-48b6-47fb-a4f7-e0faabfe12d5.jpeg" class="kg-image" alt="Retrospective on the ESTEEC Olympics Hackathon" loading="lazy" width="1280" height="854" srcset="https://blog.dvloper.io/content/images/size/w600/2025/11/data-src-image-a86588ff-48b6-47fb-a4f7-e0faabfe12d5.jpeg 600w, https://blog.dvloper.io/content/images/size/w1000/2025/11/data-src-image-a86588ff-48b6-47fb-a4f7-e0faabfe12d5.jpeg 1000w, https://blog.dvloper.io/content/images/2025/11/data-src-image-a86588ff-48b6-47fb-a4f7-e0faabfe12d5.jpeg 1280w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://blog.dvloper.io/content/images/2025/11/data-src-image-c3d42593-b3b2-409e-9aa7-759a48d9aac5.jpeg" class="kg-image" alt="Retrospective on the ESTEEC Olympics Hackathon" loading="lazy" width="1600" height="1067" 
srcset="https://blog.dvloper.io/content/images/size/w600/2025/11/data-src-image-c3d42593-b3b2-409e-9aa7-759a48d9aac5.jpeg 600w, https://blog.dvloper.io/content/images/size/w1000/2025/11/data-src-image-c3d42593-b3b2-409e-9aa7-759a48d9aac5.jpeg 1000w, https://blog.dvloper.io/content/images/2025/11/data-src-image-c3d42593-b3b2-409e-9aa7-759a48d9aac5.jpeg 1600w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[The AiRo project - winner of the NASA Space Apps Challenge 2025]]></title><description><![CDATA[<p>We&#x2019;re proud to share that several members of our team won the <strong>NASA Space Apps Challenge 2025</strong>,&#xA0; the world&#x2019;s largest global hackathon for innovation using NASA data.</p><p>Their project, <strong>AiRo</strong>, stood out for its bold approach to one of the most pressing issues of our</p>]]></description><link>https://blog.dvloper.io/the-airo-project-winner-of-the-nasa-space-apps-challenge-2025-2/</link><guid isPermaLink="false">69008b9aff0a6600018fcdab</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Tue, 28 Oct 2025 09:29:02 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2025/10/Fje7QZaWIBU5I6y.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.dvloper.io/content/images/2025/10/Fje7QZaWIBU5I6y.jpg" alt="The AiRo project - winner of the NASA Space Apps Challenge 2025"><p>We&#x2019;re proud to share that several members of our team won the <strong>NASA Space Apps Challenge 2025</strong>,&#xA0; the world&#x2019;s largest global hackathon for innovation using NASA data.</p><p>Their project, <strong>AiRo</strong>, stood out for its bold approach to one of the most pressing issues of our time: <strong>air quality management</strong>.</p><h3 id="what-airo-does"><strong>What AiRo Does</strong></h3><p><strong>AiRo</strong> automates industrial air quality management by combining <strong>NASA TEMPO satellite data</strong> with 
<strong>AI-powered infrastructure analysis</strong>.</p><p>Traditionally, companies rely on manual environmental consulting - a process that can cost over <strong>&#x20AC;57,000 per year</strong>. AiRo replaces this with automated, real-time monitoring and reporting for about <strong>&#x20AC;10,800 annually</strong>, helping organizations save <strong>around &#x20AC;47,000 per year</strong> while improving environmental compliance and community safety.</p><h3 id="the-challenge-it-solves"><strong>The Challenge It Solves</strong></h3><p>According to the <strong>World Health Organization</strong>, 99% of people worldwide breathe polluted air. Many organizations still operate reactively - responding to pollution exceedances only after they happen.</p><p>AiRo changes that. It shifts industrial facilities from <strong>reactive compliance</strong> to <strong>proactive prevention</strong> by continuously analyzing air quality, infrastructure context, and risk factors around industrial sites.</p><p>When air pollution levels exceed thresholds, AiRo doesn&#x2019;t just send alerts - it can <strong>call managers directly</strong> through AI-powered voice notifications to ensure immediate action.</p><h3 id="how-it-works"><strong>How It Works</strong></h3><ol><li><strong>Infrastructure Analysis</strong> &#x2013; Uses OpenStreetMap and OpenAI Vision to understand the facility&#x2019;s surroundings: roads, schools, hospitals, and building density.</li><li><strong>Environmental Data Integration</strong> &#x2013; Combines NASA TEMPO satellite data (NO&#x2082;, HCHO, O&#x2083;, AQI) with local measurements and weather data.</li><li><strong>Contextual Risk Assessment</strong> &#x2013; Models how pollutants move and affect surrounding communities.</li><li><strong>AI Mitigation Planning</strong> &#x2013; Multi-agent AI systems recommend short-, medium-, and long-term actions with cost-benefit analyses.</li><li><strong>Automated Reporting</strong> &#x2013; Generates both detailed 
Markdown reports and executive PowerPoint presentations.</li><li><strong>Proactive Alerts</strong> &#x2013; Delivers dashboard notifications and <strong>AI-initiated phone calls</strong> when pollution thresholds are exceeded.</li></ol><h3 id="the-impact"><strong>The Impact</strong></h3><p>For <strong>organizations</strong>, AiRo means:</p><ul><li>Lower compliance costs and fewer consultant hours</li><li>Prevention of violations, fines, and permit delays</li><li>Access to data that supports grant and tax credit applications</li></ul><p>For <strong>communities</strong>, it means:</p><ul><li>Cleaner air</li><li>Reduced health risks</li><li>Transparent, accessible air quality information</li></ul><p>AiRo demonstrates how <strong>AI, automation, and NASA open data</strong> can come together to protect both the environment and the economy.</p><h3 id="built-by-the-team-at-dvloperio"><strong>Built by the Team at dvloper.io</strong></h3><p>The project reflects our team&#x2019;s engineering philosophy: <strong>solve real problems with clarity and precision</strong>.</p><p>AiRo&#x2019;s technical stack includes <strong>React</strong>, <strong>FastAPI</strong>, <strong>Kubernetes (K3s)</strong>, <strong>Longhorn distributed storage</strong>, and <strong>Keycloak SSO</strong>, running on <strong>Hetzner servers</strong> for scalability and performance. Its AI agents leverage <strong>OpenAI GPT-5</strong>, <strong>OpenAI Vision</strong>, and <strong>Retell AI</strong> for data analysis, visual interpretation, and natural-language phone alerts &#x2014; proving that AI can be both <strong>intelligent and actionable</strong>.</p><h3 id="why-this-matters"><strong>Why This Matters</strong></h3><p>Winning NASA Space Apps isn&#x2019;t just about recognition. 
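</p><p>As a purely illustrative aside, the threshold-driven alerting described under &quot;How It Works&quot; can be sketched in a few lines of Python. Every name, unit, and limit below is a hypothetical assumption for illustration, not AiRo&#x2019;s actual code:</p>

```python
# Hypothetical sketch of threshold-based pollutant alerting; the
# pollutant names, units, and limits here are invented examples.
THRESHOLDS = {"NO2": 40.0, "O3": 100.0, "HCHO": 10.0}

def check_readings(readings):
    """Return the pollutants whose current reading exceeds its threshold."""
    return [name for name, value in readings.items()
            if value > THRESHOLDS.get(name, float("inf"))]

def alert(readings, notify):
    """Run the checks and push one notification per exceedance.

    `notify` stands in for whatever channel is wired up downstream:
    a dashboard push, or the trigger for an AI-initiated phone call.
    """
    exceeded = check_readings(readings)
    for name in exceeded:
        notify(f"{name} exceeded threshold: {readings[name]} > {THRESHOLDS[name]}")
    return exceeded
```

<p>For example, <code>alert({"NO2": 55.0, "O3": 80.0}, print)</code> would flag only NO2; in AiRo itself, the notify step is where the voice call to a manager would be initiated.</p><p>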
It&#x2019;s about validation that <strong>deep tech can create measurable environmental and social impact</strong>.</p><p>AiRo helps industries become cleaner, smarter, and more responsible, and we&#x2019;re proud of the people behind it!</p><p><strong>Explore the Project</strong></p><p>&#x1F30D; <strong>NASA: <a href="https://www.spaceappschallenge.org/2025/find-a-team/airo/?tab=project&amp;ref=blog.dvloper.io">Project Presentation</a></strong></p><p>&#xA0;&#x1F4C4; <strong>Project Report:<a href="https://1drv.ms/b/c/d180a4b6da2e219c/ESH98z8RO81IhmBWcHN-NUYBvSKyGqwFNBTLBUJLEJbVNg?e=v9ppyx&amp;ref=blog.dvloper.io"> View Report</a></strong></p><p>&#xA0;&#x1F4BB; <strong>GitLab Repository:<a href="https://gitlab.com/airo7375940?ref=blog.dvloper.io"> AiRo on GitLab</a></strong></p><p><strong>At dvloper.io</strong>, we believe great systems are never built in isolation. They&#x2019;re built by teams who see complexity as an invitation to innovate.</p><p>Congratulations, Bilciurescu Gabriel-Cosmin, Burea Mihai-Ovidiu, Mitran Andrei-Gabriel, Bazga Mihai-Carol, Pasaroiu Mihai! You did it!</p><p>Clarity. Collaboration. Code that matters.<br></p>]]></content:encoded></item><item><title><![CDATA[Super-Charge Your Ticket System with AI: Transform ServiceNow &amp; JIRA Into an Intelligent Support Brain]]></title><description><![CDATA[<p>In today&apos;s fast-paced IT operations and development environment, teams are drowning in tickets, runbooks, and scattered knowledge across JIRA and countless documentation repositories. 
What if you could transform your entire incident history, knowledge articles, and runbooks into a single, secure AI brain &#x2013; in less than a day?</p>]]></description><link>https://blog.dvloper.io/super-charge-your-ticket-system-with-ai-transform-servicenow-jira-into-an-intelligent-support-brain/</link><guid isPermaLink="false">68edf3cd2324210001ca1f7a</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Tue, 14 Oct 2025 06:58:16 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1684610529682-553625a1ffed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDl8fG5ldXJhbCUyMG5ldHdvcmslMjB2aXN1YWxpemF0aW9ufGVufDB8fHx8MTc2MDQyNTA4MXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1684610529682-553625a1ffed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDl8fG5ldXJhbCUyMG5ldHdvcmslMjB2aXN1YWxpemF0aW9ufGVufDB8fHx8MTc2MDQyNTA4MXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Super-Charge Your Ticket System with AI: Transform ServiceNow &amp; JIRA Into an Intelligent Support Brain"><p>In today&apos;s fast-paced IT operations and development environment, teams are drowning in tickets, runbooks, and scattered knowledge across JIRA and countless documentation repositories. 
What if you could transform your entire incident history, knowledge articles, and runbooks into a single, secure AI brain &#x2013; in less than a day?</p><p>Meet the&#xA0;<strong>AI Ticket Support Assistant by Dvloper.io</strong>&#xA0;&#x2013; a revolutionary solution that super-charges your ticket systems with enterprise-grade AI, turning decades of tribal knowledge into instant, actionable intelligence.</p><h2 id="the-opportunity-your-ticket-history-is-a-gold-mine">The Opportunity: Your Ticket History is a Gold Mine</h2><p>Every organization sits on a treasure trove of knowledge:</p><ul><li><strong>Years of Incident Resolution</strong>: Thousands of tickets with solutions that worked</li><li><strong>Expert Runbooks</strong>: Procedures and workflows refined through experience</li><li><strong>Knowledge Articles</strong>: Documentation created but rarely discovered when needed</li><li><strong>Tribal Knowledge</strong>: Critical insights locked in resolved tickets and comments</li><li><strong>Repetitive Patterns</strong>: The same issues being solved over and over by different teams</li></ul><p>Traditional ticket systems treat incidents as isolated events. But what if JIRA and your other ticket systems could learn from every resolution, every runbook, and every knowledge article to become an intelligent support brain?</p><p><strong>The AI Ticket Support Assistant transforms your entire incident history into actionable intelligence &#x2013; automatically, continuously, and securely.</strong></p><h2 id="what-makes-this-different-enterprise-ai-done-right">What Makes This Different: Enterprise AI Done Right</h2><p>The AI Ticket Support Assistant isn&apos;t just another chatbot slapped onto your ticket system. It&apos;s a purpose-built, enterprise-grade platform with capabilities that set it apart:</p><h3 id="specialized-etl-for-ticket-systems"><strong>Specialized ETL for Ticket Systems</strong></h3><p>No more hand-rolled scripts or custom integrations. 
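</p><p>To make the &quot;scheduled sync&quot; idea concrete: an incremental connector typically tracks an &quot;updated since&quot; watermark and fetches only what changed. The sketch below is purely illustrative &#x2013; an in-memory ticket list stands in for a real JIRA query, and none of these names are the product&apos;s actual connector code:</p>

```python
from datetime import datetime, timezone

# In-memory stand-in for a ticket system; a real connector would page
# through the ticket API instead. Keys and dates are invented.
TICKETS = [
    {"key": "OPS-1", "updated": datetime(2025, 10, 1, tzinfo=timezone.utc)},
    {"key": "OPS-2", "updated": datetime(2025, 10, 5, tzinfo=timezone.utc)},
]

def sync_since(watermark):
    """Fetch only tickets updated after `watermark`, then advance it."""
    fresh = [t for t in TICKETS if t["updated"] > watermark]
    new_watermark = max((t["updated"] for t in fresh), default=watermark)
    return fresh, new_watermark
```

<p>Each scheduled run passes in the watermark returned by the previous run, so repeated syncs only move new or changed tickets; a webhook-driven setup simply feeds the same ingestion path one event at a time.</p><p>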
Our pre-built connectors automatically ingest:</p><ul><li><strong>JIRA</strong>: All major deployment types with comprehensive issue tracking</li><li><strong>Multiple Systems</strong>: Flexible integration with various ticket platforms</li></ul><p>Configure once, and data flows automatically through scheduled syncs or real-time webhooks.&#xA0;<strong>Deploy in less than a day.</strong></p><h3 id="hybridrag-graphrag-retrieval"><strong>HybridRAG + GraphRAG Retrieval</strong></h3><p>This is where the magic happens. Our advanced retrieval system goes far beyond basic semantic search:</p><ul><li><strong>Semantic Understanding</strong>: Comprehends the meaning and context of queries, not just keywords</li><li><strong>Relational Intelligence</strong>: Discovers connections between tickets, runbooks, and knowledge articles</li><li><strong>Superior Accuracy</strong>: Retrieves the most relevant solutions by combining multiple AI techniques</li></ul><p>Result? Your team gets the right answer the first time, dramatically reducing resolution time.</p><h3 id="llm-agnostic-architecture-your-choice-your-control"><strong>LLM-Agnostic Architecture: Your Choice, Your Control</strong></h3><p>Unlike solutions that lock you into a single AI provider, we support multiple deployment options:</p><p><strong>Cloud Options:</strong></p><ul><li><strong>OpenAI</strong>&#xA0;(GPT-4, GPT-4 Turbo)</li><li><strong>Microsoft Copilot</strong></li><li><strong>Azure OpenAI Service</strong></li></ul><p><strong>On-Premises Options:</strong></p><ul><li><strong>Open-weight models</strong>&#xA0;(Llama, Mistral, and more)</li><li><strong>Complete air-gapped deployment</strong></li><li><strong>Zero data leaving your network</strong></li></ul><p><strong>Switch anytime.</strong>&#xA0;Control costs. Meet data residency requirements. No vendor lock-in. 
Ever.</p><h3 id="truly-enterprise-ready-security"><strong>Truly Enterprise-Ready Security</strong></h3><p>Built from the ground up for enterprise security requirements.</p><h2 id="any-deployment-model-your-infrastructure-your-rules">Any Deployment Model: Your Infrastructure, Your Rules</h2><p>Every organization has unique security, compliance, and infrastructure requirements. That&apos;s why we support&#xA0;<strong>any deployment model:</strong></p><p><strong><em>Cloud Deployment</em></strong></p><p><strong><em>Azure Deployment</em></strong></p><p><strong><em>On-Premises Deployment</em></strong></p><p><strong><em>Hybrid Deployment</em></strong></p><p><strong>Choose what works today. Change tomorrow if needs evolve. No penalties. No migration headaches.</strong></p><h2 id="the-user-experience-simple-yet-powerful">The User Experience: Simple Yet Powerful</h2><p><strong>One dashboard. Complete visibility. Total control.</strong></p><h2 id="system-requirements-flexible-accessible">System Requirements: Flexible &amp; Accessible</h2><p>The AI Ticket Support Assistant is designed to work with your existing infrastructure:</p><h3 id="ticket-system-compatibility"><strong>Ticket System Compatibility</strong></h3><p><strong>JIRA</strong>&#xA0;- All major deployments (Cloud, Data Center, Server)<br><strong>Multiple Systems</strong>&#xA0;- Integrate various ticket platforms simultaneously</p><h3 id="llm-requirements-choose-your-path"><strong>LLM Requirements</strong>&#xA0;(Choose Your Path)</h3><p><strong>Cloud Option:</strong></p><ul><li>OpenAI subscription (GPT-4, GPT-4 Turbo)</li><li>Microsoft Copilot subscription</li><li>Minimal infrastructure requirements</li></ul><p><strong>Azure Option:</strong></p><ul><li>Azure subscription with Azure OpenAI Service</li><li>Existing Azure infrastructure utilized</li><li>All processing within Azure tenant</li></ul><p><strong>On-Premises Option:</strong></p><ul><li>Hardware suitable for hosting local LLMs</li><li>Air-gapped deployment 
capability</li><li>Complete data sovereignty</li></ul><h3 id="security-authentication"><strong>Security &amp; Authentication</strong></h3><p>Enterprise-grade security requirements supported<br>Keycloak integration for identity management<br>Compatible with existing SSO and RBAC systems<br>Compliance-ready for regulated industries</p><h2 id="perfect-for-these-use-cases">Perfect For These Use Cases</h2><h3 id="it-operations-servicenow-teams"><strong>IT Operations &amp; ServiceNow Teams</strong></h3><p>Transform your incident management process. Instead of escalating tickets to senior engineers, Level 1 and Level 2 support can query the AI system to find similar past incidents and their exact resolutions.&#xA0;<strong>Reduce escalations by 60%</strong> while improving MTTR (Mean Time To Resolution).</p><h3 id="software-development-teams"><strong>Software Development Teams</strong></h3><p>Stop reinventing the wheel. When developers hit a bug or technical challenge, they can instantly access solutions from similar issues across all projects.&#xA0;<strong>Cut research time by 75%</strong>&#xA0;and accelerate feature delivery.</p><h3 id="devops-infrastructure"><strong>DevOps &amp; Infrastructure</strong></h3><p>Build an always-available expert system for operational procedures, troubleshooting guides, and infrastructure decisions. New ops team members can understand complex system architecture and common failure patterns in days instead of months.</p><h3 id="customer-support"><strong>Customer Support</strong></h3><p>Enable support teams to provide faster, more accurate responses by leveraging the collective knowledge of your IT operations and development teams.&#xA0;<strong>First-call resolution rates increase by 40%</strong>.</p><h3 id="compliance-audit-teams"><strong>Compliance &amp; Audit Teams</strong></h3><p>Maintain complete audit trails and ensure consistent responses to security incidents and compliance queries. 
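</p><p>As one illustration of what a traceable audit trail can look like, here is a minimal, hypothetical sketch of an append-only record per AI interaction; the field names are our own invention, not the product&apos;s real schema:</p>

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # a real deployment would use durable, append-only storage

def record_interaction(user, query, answer, sources):
    """Append one traceable entry per AI interaction; return it as JSON."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "query": query,
        "answer": answer,
        "sources": sources,  # e.g. ticket keys or article ids cited in the answer
    }
    AUDIT_LOG.append(entry)
    return json.dumps(entry)  # serialized form, ready to ship to a log sink
```

<p>Because each entry records who asked what and which tickets or articles backed the answer, auditors can replay any response after the fact.</p><p>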
Every AI interaction is logged and traceable.</p><h2 id="ready-to-transform-your-teams-productivity">Ready to Transform Your Team&apos;s Productivity?</h2><p>The AI Ticket Support Assistant represents the next evolution in project management &#x2013; where artificial intelligence doesn&apos;t replace human expertise but amplifies it. Your team&apos;s collective knowledge becomes a powerful, searchable, and intelligent resource that grows stronger with every project.</p><p><strong>Stop letting valuable knowledge get lost in ticket graveyards. Start building your intelligent project ecosystem today.</strong></p><hr><h3 id="key-takeaways"><strong>Key Takeaways</strong></h3><p><strong>Transform existing JIRA data</strong>&#xA0;into intelligent, searchable knowledge<br><strong>Reduce research time by 75%</strong>&#xA0;with natural language queries<br><strong>Accelerate onboarding</strong>&#xA0;with AI-guided project exploration<br><strong>Preserve tribal knowledge</strong>&#xA0;across team transitions<br><strong>Enterprise-ready security</strong>&#xA0;with role-based access control<br><strong>Measurable ROI</strong>&#xA0;through improved productivity and reduced costs</p>]]></content:encoded></item><item><title><![CDATA[Turning a Frustrating Bitwarden Error into a Better Skyvern Feature]]></title><description><![CDATA[<p>While testing Skyvern with different integrations, our teammate Serena ran into some unexpected authentication issues when connecting it to Bitwarden. She not only solved the problem, but also improved Skyvern so others won&#x2019;t face the same roadblocks. 
Here&#x2019;s what she had to say.</p><p><strong>The Problem: Vague</strong></p>]]></description><link>https://blog.dvloper.io/turning-a-frustrating-bitwarden-error-into-a-better-skyvern-feature/</link><guid isPermaLink="false">68b698c1772af70001279e14</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Tue, 02 Sep 2025 07:35:22 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1729860646385-3e71fb29ff04?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGJpdHdhcmRlbnxlbnwwfHx8fDE3NTY3OTg1MTZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1729860646385-3e71fb29ff04?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGJpdHdhcmRlbnxlbnwwfHx8fDE3NTY3OTg1MTZ8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Turning a Frustrating Bitwarden Error into a Better Skyvern Feature"><p>While testing Skyvern with different integrations, our teammate Serena ran into some unexpected authentication issues when connecting it to Bitwarden. She not only solved the problem, but also improved Skyvern so others won&#x2019;t face the same roadblocks. Here&#x2019;s what she had to say.</p><p><strong>The Problem: Vague Authentication Errors</strong></p><p>I kept getting authentication failures with no useful explanation. The output was short and gave no direction. After asking on the Skyvern Discord, I learned this was common. Bitwarden&apos;s CLI often produces vague errors that are hard to troubleshoot.</p><p>I eventually found the cause and fixed it. But the experience made me think about how Skyvern could make this easier for others.</p><p><strong>The Idea: More Helpful Error Messages in Skyvern</strong></p><p>The main issue was the lack of guidance. I added an optional extra field in error messages. 
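</p><p>A minimal sketch of the idea in Python follows; the patterns and hint texts below are illustrative examples in the spirit of the feature, not Skyvern&apos;s actual rules or API:</p>

```python
import re

# Each known failure condition pairs a pattern with a practical hint.
# These entries are illustrative examples, not Skyvern's real rules.
HINTS = [
    (re.compile(r"not logged in", re.IGNORECASE),
     "Run `bw login` (or `bw unlock`) and export BW_SESSION before retrying."),
    (re.compile(r"invalid master password", re.IGNORECASE),
     "Re-check the master password stored in your credential settings."),
]

def enrich_error(message):
    """Return the original error message plus any matching hints."""
    hints = [hint for pattern, hint in HINTS if pattern.search(message)]
    return {"error": message, "hints": hints}
```

<p>Adding a new error-hint pair is just another entry in the table, which is what makes this mechanism easy to extend over time.</p><p>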
This field can include:</p><ul><li>Hints that point to a possible solution</li><li>Extra context relevant to the specific error condition</li></ul><p>The system is general and can be expanded to include more conditions and hints over time.</p><p><strong>The First Use Case</strong></p><p>The first condition detects the exact Bitwarden CLI issue I faced. When triggered, it outputs the same fix that worked for me.</p><p><strong>Why This Matters</strong></p><ul><li>Reduces guesswork by giving practical guidance</li><li>Helps new users troubleshoot faster without needing deep system knowledge</li><li>Allows easy addition of new error-hint pairs in the future</li></ul><p><strong>Closing Thoughts</strong></p><p>Debugging is part of development, but vague errors slow everyone down. By adding small, targeted hints to error messages, we make the system easier to work with and reduce repeated troubleshooting. This improvement should help anyone integrating Skyvern with Bitwarden.</p>]]></content:encoded></item><item><title><![CDATA[INTRANET Launch: Unifying Tools, Streamlining Workflows]]></title><description><![CDATA[<p>We are proud to announce the release of our new internal application, now live in production. This platform has been designed to bring together essential day-to-day tools into a single, secure, and efficient environment. 
By centralizing these functions, we aim to reduce time spent switching between systems, improve visibility of</p>]]></description><link>https://blog.dvloper.io/intranet-launch-unifying-tools-streamlining-workflows/</link><guid isPermaLink="false">68ac02b5ad07ee00010826a2</guid><dc:creator><![CDATA[Bianca Brînzoi]]></dc:creator><pubDate>Mon, 25 Aug 2025 06:35:47 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1507925921958-8a62f3d1a50d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fGFic3RyYWN0JTIwYnVzaW5lc3MlMjB3b3JrZmxvd3xlbnwwfHx8fDE3NTYxMDM3Mjh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1507925921958-8a62f3d1a50d?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDV8fGFic3RyYWN0JTIwYnVzaW5lc3MlMjB3b3JrZmxvd3xlbnwwfHx8fDE3NTYxMDM3Mjh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="INTRANET Launch: Unifying Tools, Streamlining Workflows"><p>We are proud to announce the release of our new internal application, now live in production. This platform has been designed to bring together essential day-to-day tools into a single, secure, and efficient environment. By centralizing these functions, we aim to reduce time spent switching between systems, improve visibility of information, and provide a consistent user experience for all employees.</p><p>The new product integrates several key operational areas into one cohesive solution. Employees can now manage leave requests, participate in performance check-ins, track goals, stay informed through company announcements, and access critical resources, all from a single interface. 
The result is not just a collection of features, but a fully connected system that supports everyday workflows and long-term productivity.</p><p><strong>A Modern, Scalable Technology Foundation</strong></p><p>The solution was developed using a technology stack carefully chosen for its ability to meet current needs while supporting future growth. <strong>Angular</strong> powers the user interface, providing a responsive and intuitive experience across devices. On the backend, <strong>Node.js</strong> with <strong>Express</strong> and <strong>TSOA</strong> offers a robust and maintainable service layer, while <strong>Prisma ORM</strong> ensures reliable and efficient communication with the database.</p><p>Security and authentication are handled through <strong>Keycloak SSO</strong>, enabling employees to log in with their existing credentials while ensuring compliance with modern identity management standards. The product&#x2019;s components are containerized and orchestrated by <strong>Kubernetes</strong>, ensuring scalability and resilience. Continuous integration and delivery are managed through <strong>GitLab CI/CD</strong>, allowing for rapid, safe, and automated updates.</p><p>One of the key enhancements is the addition of the <strong>OKR</strong> (Objectives and Key Results) diagram, providing a visual and interactive way to track progress toward strategic and individual goals.</p><p><strong>A Closer Look at the Features</strong></p><p>At the heart of the application is the <strong>dashboard</strong>, which serves as a central hub for employees. From here, they can quickly navigate to timesheets, the company wiki, GitLab repositories, and other important resources. 
The layout prioritizes accessibility, ensuring that the most commonly used tools are only a click away.</p><p><strong>Leave management</strong> is one of the core modules, allowing employees to request annual, sick, or other types of leave, view their remaining balances, and review their leave history. The system also provides a shared calendar view, making it easy for managers and colleagues to see upcoming absences and plan accordingly.</p><p>The <strong>performance check-in</strong> feature supports regular self-assessments and feedback exchange. Employees can complete structured assessment forms, reflect on their progress, and view feedback received from others. This promotes transparency, professional growth, and a culture of open communication.</p><p>The <strong>user profile</strong> section gives employees the ability to search for colleagues and view their profiles, making it easier to connect, collaborate, and understand team structures. It also allows users to manage their own personal details, ensuring information is accurate and up to date.</p><p><strong>Ensuring Quality from Day One</strong></p><p>A rigorous quality assurance process was applied throughout the development cycle. Automated testing verified the stability of core components, while manual testing ensured that the interface was intuitive and functioned as expected. User acceptance testing was conducted with realistic scenarios to confirm that the product met operational requirements.</p><p>In addition, security and performance checks were carried out to safeguard sensitive information and confirm that the system performs reliably under load. This careful approach ensures that employees can depend on the application from the moment they begin using it.</p><p><strong>Accessing the Application</strong></p><p>All employees may access the application after authentication. A detailed user manual link is available on the dashboard, providing step-by-step guidance and explanations for each feature. 
This resource ensures that employees can quickly become familiar with the product and make full use of its capabilities.</p><p>This launch represents a significant step forward in modernizing our internal systems. By consolidating multiple tools into a single, secure platform, we have created an environment that will evolve alongside the company&#x2019;s needs. Feedback from employees will continue to shape future updates, ensuring that the system remains relevant, efficient, and valuable to our daily work.</p>]]></content:encoded></item><item><title><![CDATA[ZTCM - Zero Touch Configuration Management]]></title><description><![CDATA[<h2 id="highlights"><strong>Highlights</strong></h2><p>ZTCM is a modular platform that automates configuration management for distributed edge and IoT devices. The platform reduces manual configuration tasks, speeds up deployment processes, and provides consistent security policies across device networks. ZTCM helps organizations in retail, manufacturing, logistics, and infrastructure manage their distributed devices more efficiently.</p><h2 id="what-ztcm-does"><strong>What</strong></h2>]]></description><link>https://blog.dvloper.io/ztcm-zero-touch-configuration-management/</link><guid isPermaLink="false">6866372dc5aa1e0001fc2980</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Thu, 03 Jul 2025 07:55:38 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2025/07/unname.jpeg" medium="image"/><content:encoded><![CDATA[<h2 id="highlights"><strong>Highlights</strong></h2><img src="https://blog.dvloper.io/content/images/2025/07/unname.jpeg" alt="ZTCM - Zero Touch Configuration Management"><p>ZTCM is a modular platform that automates configuration management for distributed edge and IoT devices. The platform reduces manual configuration tasks, speeds up deployment processes, and provides consistent security policies across device networks. 
ZTCM helps organizations in retail, manufacturing, logistics, and infrastructure manage their distributed devices more efficiently.</p><h2 id="what-ztcm-does"><strong>What ZTCM Does</strong></h2><ul><li><strong>Automated device setup</strong> - Configure multiple devices without manual intervention</li><li><strong>Works with existing tools</strong> - Integrates with Keycloak, Ansible Tower, MongoDB, GitLab, and Teleport</li><li><strong>Reusable templates</strong> - Create configuration patterns once, use them repeatedly</li><li><strong>Security and tracking</strong> - Built-in access controls and detailed activity logs</li><li><strong>Flexible hosting</strong> - Deploy on cloud platforms, your own servers, or mixed environments</li><li><strong>Easy integration</strong> - Connect with existing IT management systems through APIs</li><li><strong>Real-time monitoring</strong> - Track device status and configuration compliance</li></ul><h2 id="1-what-is-zero-touch-configuration"><strong>1. What is Zero Touch Configuration?</strong></h2><h3 id="how-it-works"><strong>How It Works</strong></h3><p>Zero Touch Configuration means devices can be set up and managed without manual steps during deployment or ongoing operations. The system handles device discovery, security authentication, configuration deployment, and status monitoring automatically.</p><p>The approach requires three main components: secure device identification using digital certificates, template-driven configuration using standard automation tools, and two-way communication between the management platform and devices for status updates and remote control.</p><p>Configuration templates contain all the settings a device needs including network parameters, security policies, and operational rules. 
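</p><p>As a rough sketch of that idea (names and values are invented for illustration, not ZTCM&apos;s real schema), a shared base template can be merged with per-site variables:</p>

```python
# Illustrative only: a shared base template plus per-device variables.
BASE_TEMPLATE = {
    "ntp_server": "ntp.example.internal",
    "log_level": "info",
    "firewall_profile": "strict",
}

def render_config(base, variables):
    """Overlay location- or device-specific variables on the shared template."""
    config = dict(base)       # copy, so the template itself is never mutated
    config.update(variables)  # per-site overrides win
    return config
```

<p>For instance, <code>render_config(BASE_TEMPLATE, {"site": "warehouse-7", "log_level": "debug"})</code> keeps the shared firewall profile while overriding the log level for that one location.</p><p>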
These templates can include variables that change based on location or device type while keeping everything else consistent.</p><h3 id="where-its-useful"><strong>Where It&apos;s Useful</strong></h3><p><strong>Manufacturing and Industrial Settings</strong> Factories deploy sensor networks that need identical configurations across production lines. Automated setup allows quick sensor replacement during maintenance without requiring specialized technicians on-site. Templates ensure all devices collect data the same way and follow the same security rules.</p><p><strong>Retail and Edge Computing</strong> Stores, clinics, and transportation hubs use distributed computers that need standard configurations adapted to local network conditions. Automated deployment reduces setup time from hours to minutes while maintaining consistent security across all locations.</p><p><strong>Network Equipment Management</strong> IT teams benefit from template-driven configuration that applies security policies, network settings, and service rules consistently. This approach reduces configuration mistakes and allows administrators to manage more devices with the same staff.</p><p><strong>Remote Monitoring Applications</strong> Environmental monitoring, fleet tracking, and asset management systems need device configurations that vary by location while maintaining centralized data standards. Automation enables scaling these deployments without proportional increases in support staff.</p><h2 id="2-how-ztcm-started"><strong>2. How ZTCM Started</strong></h2><p>Managing distributed devices manually creates significant challenges for IT teams. 
When organizations deploy hundreds or thousands of IoT sensors, edge computers, or network devices across multiple locations, configuration becomes a bottleneck that slows operations and introduces errors.</p><p>The ZTCM project was developed as an academy learning initiative to address three key problems: the time-consuming nature of configuring devices individually, the difficulty of maintaining consistent security settings across all devices, and the need for centralized control without requiring technical staff at every location.</p><p>Research showed that existing solutions were either limited to specific vendors or too complex for many organizations. This created an opportunity for a comprehensive learning project: developing a flexible, straightforward approach to automated device configuration that works across different hardware types and deployment scenarios.</p><h2 id="3-building-ztcm-as-a-learning-project"><strong>3. Building ZTCM as a Learning Project</strong></h2><h3 id="project-goals-and-technical-learning"><strong>Project Goals and Technical Learning</strong></h3><p>The ZTCM development served as a comprehensive learning initiative covering distributed system design, integration with existing tools, and automated configuration management. The project tackled real technical challenges while building practical experience with modern infrastructure management.</p><p>Learning objectives included implementing service-based architecture, designing secure communication protocols, optimizing databases for different data types, and integrating with existing authentication systems. These areas provided hands-on experience with production system development practices.</p><h3 id="system-architecture"><strong>System Architecture</strong></h3><p>ZTCM uses a modular architecture that separates different functions like authentication, configuration management, task execution, and monitoring. 
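</p><p>In miniature, that separation looks like independent services composed behind a thin entry point. The sketch below is purely illustrative; the class names and the token check are invented for the example and do not reflect ZTCM&apos;s actual interfaces:</p>

```python
# Illustrative sketch of function separation: each concern sits behind its
# own small service so it can be scaled or replaced independently.
class AuthService:
    def verify(self, token: str) -> bool:
        # Stand-in for real certificate-based device authentication.
        return token == "valid-token"

class ConfigService:
    def fetch(self, device_id: str) -> dict:
        # Stand-in for template lookup and rendering.
        return {"device": device_id, "template": "edge-default"}

class ControlCenter:
    """Entry point that routes a request through the separate services."""

    def __init__(self) -> None:
        self.auth = AuthService()
        self.config = ConfigService()

    def handle(self, device_id: str, token: str) -> dict:
        if not self.auth.verify(token):
            raise PermissionError("device not authenticated")
        return self.config.fetch(device_id)

result = ControlCenter().handle("edge-042", "valid-token")
print(result)
```

<p>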
This separation allows each part to scale independently based on demand while maintaining clear connections between components.</p><figure class="kg-card kg-image-card"><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXd_giikNfTCIJQtQuXfZicJRBjWzSsw6XHkY9QKMx4bPJf0ET_AGI9GJv0j61FKS4YtMDxoFbqgZ0at-xo3vrOhbNOiKmrWwKlNYz0j-_nJU0qS5BHFmnkUSsc17Nv3K-f_qUUGuQ?key=XmuP3hj7WpdH87TTobDqjQ" class="kg-image" alt="ZTCM - Zero Touch Configuration Management" loading="lazy" width="699" height="466"></figure><p></p><h3 id="how-the-system-works"><strong>How the System Works</strong></h3><p>Users access the platform through a web interface that sends configuration requests to a request handler for validation and routing to the control center. The control center works with validation modules including marketplace and device verification components, while task execution happens through Ansible Tower integration.</p><p>All activities are tracked and recorded, with data stored in MongoDB and user authentication handled by Keycloak. Configuration scripts and templates are version-controlled in GitLab, while secure communication with remote devices uses Teleport tunneling and local agent software.</p><h3 id="development-process-and-learning"><strong>Development Process and Learning</strong></h3><p>The learning experience followed a step-by-step approach combining theoretical knowledge with practical implementation. Each development phase addressed specific technical challenges while building understanding of distributed system architecture.</p><p>Development started with single-server prototypes to validate core concepts, then moved to distributed architecture that introduced service communication and data consistency challenges. Later phases focused on integration requirements and deployment considerations.</p><p>Technical skills developed included container management, API design, database optimization, security implementation, and monitoring system integration. 
These skills align with industry requirements for infrastructure management platforms.</p><h3 id="technical-challenges-and-solutions"><strong>Technical Challenges and Solutions</strong></h3><p>Database design provided significant learning opportunities because different devices need different configuration parameters. This led to flexible document-based storage solutions rather than rigid table-based databases.</p><p>Authentication system integration revealed complexities in working with existing identity providers. Implementation required understanding security protocols and certificate-based authentication to support different organizational requirements.</p><p>Service communication introduced challenges in service discovery, load distribution, and error handling. The learning process involved implementing health monitoring, failure protection patterns, and distributed logging for system visibility.</p><p>Configuration template management required understanding variable replacement, template validation, and version control integration. These concepts connected software development practices with infrastructure automation requirements.</p><p></p><h2 id="4-how-ztcm-can-be-used"><strong>4. How ZTCM Can Be Used</strong></h2><h3 id="standalone-platform"><strong>Standalone Platform</strong></h3><p>ZTCM works as an independent device management solution for organizations that need comprehensive configuration control over distributed devices. Standalone deployment provides complete lifecycle management including device registration, configuration deployment, monitoring, and compliance reporting.</p><p>Infrastructure requirements include dedicated computing resources for control components, network connectivity to managed devices, and certificate authority integration for device authentication. 
Storage needs scale with device count and configuration complexity.</p><p>The platform supports device populations from hundreds to thousands of units, with performance scaling through horizontal component expansion and database optimization. Deployment time ranges from minutes for single devices to hours for large-scale updates.</p><h3 id="integration-tool"><strong>Integration Tool</strong></h3><p>ZTCM connects with existing management platforms through standard APIs and notification interfaces. The platform can function as a specialized configuration component within larger IT management systems.</p><p><strong>Development Pipeline Integration</strong> Configuration templates integrate with continuous integration pipelines for automated testing and deployment. GitLab integration enables version control workflows while notifications trigger configuration updates based on code changes.</p><p><strong>IT Service Management</strong> The platform provides API endpoints for integration with IT service management platforms. Automated ticket creation captures configuration failures while status updates maintain visibility within existing management dashboards.</p><p><strong>Network Management Integration</strong> Monitoring capabilities enable integration with network management platforms for centralized device status reporting. Configuration compliance checks integrate with security policy systems for automated correction workflows.</p><h3 id="deployment-options"><strong>Deployment Options</strong></h3><p><strong>Cloud Deployment</strong> Container management enables flexible scaling based on operational demands. Cloud provider integration supports managed database services and certificate management. Multi-region deployment provides geographic distribution for global device management.</p><p><strong>Mixed Environment</strong> Cloud-based management console combines with on-premises execution components for organizations with data sovereignty requirements. 
Encrypted communication maintains security while enabling centralized management of distributed infrastructure.</p><p><strong>On-Premises Deployment</strong> Local deployment supports environments with restricted network connectivity. Local certificate authority integration maintains security while offline capabilities enable configuration management during network outages.</p><h2 id="5-conclusion"><strong>5. Conclusion</strong></h2><p>ZTCM demonstrates how academy projects can address real-world operational challenges in distributed infrastructure environments. This learning initiative showcased practical approaches to automated device configuration management while developing expertise in distributed systems design.</p><p>The development process provided valuable insights into distributed systems design, secure communication protocols, and configuration management automation. Technical decisions regarding database selection, service-based architecture, and integration proved effective for the target use cases and demonstrated the potential for production implementation.</p><p>Platform capabilities developed through this academy project include significant reduction in configuration deployment time, elimination of manual configuration errors, and improved security through consistent policy application. The project demonstrates how educational initiatives can produce solutions with real operational value.</p><p>Future development could include machine learning integration for predictive configuration management, expanded device support for emerging IoT platforms, and enhanced integration capabilities with cloud-native management platforms. 
The modular architecture foundation developed during the academy program supports these enhancements without requiring fundamental redesign.</p><p>The ZTCM academy project establishes a foundation for understanding large-scale automation challenges while demonstrating practical solutions for modern device infrastructure management requirements through hands-on learning and development.</p>]]></content:encoded></item><item><title><![CDATA[Breaking the Bottleneck: A Decentralised Grid for On-Demand Research Compute - Team Dvloper.io at ETHDam]]></title><description><![CDATA[<blockquote>&#x201C;What if your research task was urgent and horizontally scalable?&#x201D; &#x201C;What if you&#x2019;re new and don&#x2019;t want to invest in hardware yet?&#x201D; &#x201C;What if you could help others reach their computational goals&#x2014;and get rewarded for it?&#x201D;</blockquote><p>These three</p>]]></description><link>https://blog.dvloper.io/breaking-the-bottleneck/</link><guid isPermaLink="false">684ae32c56362600012b630d</guid><dc:creator><![CDATA[Dvloper Blog]]></dc:creator><pubDate>Mon, 16 Jun 2025 07:07:06 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2025/06/ETHDAM-25-FRIDAY-MAY-9TH-RANDOMS--@concretestate_photography-lo-res-46.jpg" medium="image"/><content:encoded><![CDATA[<blockquote>&#x201C;What if your research task was urgent and horizontally scalable?&#x201D; &#x201C;What if you&#x2019;re new and don&#x2019;t want to invest in hardware yet?&#x201D; &#x201C;What if you could help others reach their computational goals&#x2014;and get rewarded for it?&#x201D;</blockquote><img src="https://blog.dvloper.io/content/images/2025/06/ETHDAM-25-FRIDAY-MAY-9TH-RANDOMS--@concretestate_photography-lo-res-46.jpg" alt="Breaking the Bottleneck: A Decentralised Grid for On-Demand Research Compute - Team Dvloper.io at ETHDam"><p>These three questions sparked our weekend sprint at <strong>ETHDam</strong>. 
The result is <strong>ChainLabGrid</strong>: an open source ROFL app that lets anyone contribute or consume compute power, while Trusted Execution Environments (TEEs) keep every contribution secure and every result verifiable.</p><h3 id="framing-the-problem"><strong>Framing the problem</strong></h3><p>Research deadlines rarely wait for procurement cycles. If your model suddenly needs 10&#xD7; more horsepower, you either overpay a cloud provider or scramble for hardware you&#x2019;ll soon under-utilise. On the other side of the fence, countless CPUs and GPUs sit idle on laptops, workstations and edge boxes. <strong>ChainLabGrid</strong> bridges those two worlds: it turns spare capacity into a fluid, on-demand pool while guaranteeing data privacy and result integrity via TEEs.</p><h3 id="what-we-actually-built-at-ethdam"><strong>What we actually built at ETHDam</strong></h3><p>During the 48-hour sprint we delivered a running MVP, awarded a place among the hackathon&#x2019;s top 10 applications, that:</p><ul><li><strong>Accepts any compute job</strong> from a front-end App and records it on a Main Contract.</li><li><strong>Explodes the job into bite-sized subtasks</strong>&#x2014;each gets its own Sub-contract so hundreds of workers can chip away in parallel.</li><li><strong>Lets contributors discover and claim work</strong> with a single wallet click; no prior staking or hardware disclosure required.</li><li><strong>Runs every validation step inside TEEs</strong>, producing cryptographic attestations that the correct code path executed on untampered data.</li><li><strong>Aggregates and encrypts the final artefact</strong>, then releases automatic payments to all successful workers.</li></ul><h3 id="a-tour-of-the-flow"><strong>A tour of the flow</strong></h3><ol><li><strong>Task creation</strong> &#x2013; A user submits a compute request; the Main Contract stores metadata and notifies an off-chain Request 
Listener.</li><li><strong>Task expansion</strong> &#x2013; The listener verifies the payload, updates the active-task ledger and calls back to mint individual Sub-contracts.</li><li><strong>Discovery &amp; assignment</strong> &#x2013; Contributors pull a live list of open work, pick a subtask, and the contract assigns it atomically.</li><li><strong>Execution</strong> &#x2013; The contributor&#x2019;s node crunches numbers; capacity checks prevent over-commitment.</li><li><strong>Validation</strong> &#x2013; Sub-Task Validators inside TEEs replay the job deterministically; only approved outputs move forward.</li><li><strong>Aggregation &amp; payout</strong> &#x2013; Once every piece passes, a Task Aggregator stitches results, encrypts them for the requester, and the chain disburses rewards.</li></ol><h3 id="why-tees-instead-of-classic-crypto-proofs"><strong>Why TEEs instead of classic crypto-proofs?</strong></h3><ul><li><strong>Privacy first</strong> &#x2013; sensitive research data never leaves enclave memory.</li><li><strong>Lightweight trust</strong> &#x2013; we sidestep heavy zero-knowledge proofs; the hardware attestation is both cheaper and faster to verify.</li><li><strong>Interoperability</strong> &#x2013; by building on Oasis Sapphire (EVM-compatible), we reuse the Solidity ecosystem while inheriting enclave guarantees.</li></ul><h3 id="the-impact-we%E2%80%99re-chasing"><strong>The impact we&#x2019;re chasing</strong></h3><p><strong>ChainLabGrid</strong> turns compute into a liquid commodity: researchers rent milliseconds instead of machines, newcomers monetise idle rigs with two clicks, and enterprises gain a privacy-preserving overflow buffer for bursty workloads. 
All secured by the same enclave tech that protects mission-critical fintech and healthcare systems.</p><figure class="kg-card kg-image-card"><img src="https://blog.dvloper.io/content/images/2025/06/ETHDAM-25-SUNDAY-11ND-RANDOMS-@concretestate_photography-lo-res-45--2--1.jpg" class="kg-image" alt="Breaking the Bottleneck: A Decentralised Grid for On-Demand Research Compute - Team Dvloper.io at ETHDam" loading="lazy" width="2000" height="1334" srcset="https://blog.dvloper.io/content/images/size/w600/2025/06/ETHDAM-25-SUNDAY-11ND-RANDOMS-@concretestate_photography-lo-res-45--2--1.jpg 600w, https://blog.dvloper.io/content/images/size/w1000/2025/06/ETHDAM-25-SUNDAY-11ND-RANDOMS-@concretestate_photography-lo-res-45--2--1.jpg 1000w, https://blog.dvloper.io/content/images/size/w1600/2025/06/ETHDAM-25-SUNDAY-11ND-RANDOMS-@concretestate_photography-lo-res-45--2--1.jpg 1600w, https://blog.dvloper.io/content/images/2025/06/ETHDAM-25-SUNDAY-11ND-RANDOMS-@concretestate_photography-lo-res-45--2--1.jpg 2400w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[Timesheet App – From Learning Tool to a Centralized Time Tracking Powerhouse]]></title><description><![CDATA[<p>What began as a hands-on learning experience has now grown into a reliable, daily-use internal product: the <strong>Timesheet App</strong>. 
Built entirely in-house, the app is now our centralized system for tracking time, managing project involvement, and generating reports that support HR, Project Managers, and Admin teams.</p><h2 id="why-we-built-the-timesheet-app"><strong>Why We Built the</strong></h2>]]></description><link>https://blog.dvloper.io/timesheet-app-from-learning-tool-to-a-centralized-time-tracking-powerhouse/</link><guid isPermaLink="false">6818d916f169ce000115a63e</guid><dc:creator><![CDATA[Bianca Brînzoi]]></dc:creator><pubDate>Mon, 05 May 2025 15:40:20 GMT</pubDate><media:content url="https://blog.dvloper.io/content/images/2025/05/ChatGPT-Image-May-5--2025--06_46_01-PM--1-.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.dvloper.io/content/images/2025/05/ChatGPT-Image-May-5--2025--06_46_01-PM--1-.png" alt="Timesheet App &#x2013; From Learning Tool to a Centralized Time Tracking Powerhouse"><p>What began as a hands-on learning experience has now grown into a reliable, daily-use internal product: the <strong>Timesheet App</strong>. Built entirely in-house, the app is now our centralized system for tracking time, managing project involvement, and generating reports that support HR, Project Managers, and Admin teams.</p><h2 id="why-we-built-the-timesheet-app"><strong>Why We Built the Timesheet App</strong></h2><p>In every organization, time is one of the most valuable resources. We needed a way to clearly track and report how it&#x2019;s spent&#x2014;per person, per project. 
Off-the-shelf tools didn&#x2019;t give us the flexibility or insight we were after, so we created a custom solution that could.</p><p>The goal was to build something that:</p><ul><li>Allows employees to easily clock hours on assigned projects</li><li>Gives PMs visibility into team assignments and project timelines<br></li><li>Enables HR to pull clean, exportable reports on time spent<br></li><li>Ensures secure, role-based access to all data</li></ul><h2 id="a-closer-look-at-the-app"><strong>A Closer Look at the App</strong></h2><p>At the heart of the Timesheet App is the <strong>Clocking tab</strong>, where employees record the time they&#x2019;ve spent on their assigned projects. It&#x2019;s intentionally minimalistic&#x2014;quick to use and built to encourage consistency without getting in the way of the work itself.</p><p>The <strong>Projects tab</strong> gives everyone visibility into the current state of projects: when they start and end, who&#x2019;s involved, and who&#x2019;s managing them. This helps keep teams aligned and avoids confusion about responsibilities or timelines.</p><p>In the <strong>Users tab</strong>, admins can manage user roles and access levels. It provides a clear breakdown of who is working on what and helps ensure that people only see the data relevant to their role.</p><p>Finally, the <strong>Reporting page</strong> is where things come together. From here, users can generate detailed CSV reports filtered by date range, project, or user. These reports are used regularly by HR for audits and time analysis, and by project leads for better planning.</p><h3 id="engineering-behind-the-scenes"><strong>Engineering Behind the Scenes</strong></h3><p>We dedicated time to developing a robust backend to ensure optimal app performance and security. 
That work produced a technical foundation built on the following components:</p><ul><li><strong>Authentication and Authorization: </strong>Integrated Keycloak for secure, centralized user management, including SSO, identity brokering, and fine-grained access control.</li><li><strong>Data Synchronization: </strong>Keycloak user data is automatically synced to the Timesheet App database to maintain consistency and simplify role assignment.<br></li><li><strong>State Management: </strong>Used NgRx Store to manage application state. As the app evolved, this centralized approach helped simplify data flow and improve responsiveness.<br></li><li><strong>Logging and Monitoring: </strong>A custom logging module gives us better insight into traffic and backend activity, helping us monitor usage patterns and quickly identify issues.<br></li><li><strong>Admin Dashboard: </strong>Built for administrators to oversee data, troubleshoot, and manage system-level settings.</li></ul><h2 id="quality-assurance-built-in-from-the-start"><strong>Quality Assurance: Built-In from the Start</strong></h2><p>QA wasn&#x2019;t an afterthought&#x2014;it was embedded into the development process from the very beginning. 
Delivering a reliable experience was non-negotiable, especially for an internal tool that&apos;s used daily.&#xA0;</p><p>Our QA process put a strong emphasis on test coverage and real-world use-case validation, built around four main focus points:</p><ul><li><strong>Comprehensive Test Case Design</strong>: Based on user stories, we created detailed QA test cases covering all primary and edge-case interactions.</li><li><strong>Post-MVP Bug Strategy</strong>: A structured bug identification and resolution plan was created after the MVP release, helping prioritize fixes and enhancements based on user impact.</li><li><strong>Automated Testing Suite</strong>: We implemented full automated testing across critical workflows. These tests ran consistently to validate new updates, catch regressions, and maintain app stability.</li><li><strong>Continuous Improvement</strong>: Feedback loops between QA, developers, and stakeholders ensured quick iterations, faster delivery, and consistent quality.</li></ul><p>The end result? A smooth, high-performance tool that passed all automated checks and met our internal reliability standards before going into full production.</p><h2 id="real-world-results-the-impact"><strong>Real-World Results: The Impact</strong></h2><p>Since launching, the <strong>Timesheet App</strong> has become an essential part of our internal workflow. It&#x2019;s used daily across departments and serves as a reliable source of truth for time tracking and reporting. HR uses it for audits and payroll prep. Project managers use it to monitor progress and allocate resources. Employees use it to track their time with ease.</p><p>What began as a small internal experiment has grown into a production-ready tool that solves real problems. And thanks to its solid technical foundation, we&#x2019;re well-positioned to keep improving it over time.</p>]]></content:encoded></item></channel></rss>