<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Yash's Blog]]></title><description><![CDATA[Welcome to my blog page, a place where I write about tech, and tutorials as I learn them. My goal is to share whatever I learned the 'hard way' in the simplest possible way.]]></description><link>https://blogs.yasharyan.dev</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1621913853739/iG5GRbQ5f.png</url><title>Yash&apos;s Blog</title><link>https://blogs.yasharyan.dev</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 15:18:03 GMT</lastBuildDate><atom:link href="https://blogs.yasharyan.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Basics of the Operating System]]></title><description><![CDATA[I have been reading this book called ‘Operating System Concepts’ by Abraham Silberschatz, Peter Baer Galvin and Greg Gagne for some time now, and I thought, let’s share the knowledge I am gaining with my fellow readers. 
So here goes everything that I...]]></description><link>https://blogs.yasharyan.dev/basics-of-the-operating-system</link><guid isPermaLink="true">https://blogs.yasharyan.dev/basics-of-the-operating-system</guid><category><![CDATA[operating system]]></category><category><![CDATA[Architecture Design]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Thu, 15 Jan 2026 17:13:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768496955043/1530b570-eece-4c61-89f3-854dd7392440.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have been reading this book called ‘Operating System Concepts’ by Abraham Silberschatz, Peter Baer Galvin and Greg Gagne for some time now, and I thought, let’s share the knowledge I am gaining with my fellow readers. So here goes everything that I have been learning from the book about operating systems, broken down into chapters. This is the first chapter and contains the basics.</p>
<p><img src="https://cdn-images-1.medium.com/max/1600/1*ZkmlWqqIQq_E5O0QnBV-yQ.png" alt /></p>
<h4 id="heading-what-does-an-operating-system-do">What does an Operating System do?</h4>
<p>At a high level, the operating system (OS) exists for one reason: to make sure a computer’s resources are used properly. Not necessarily ‘efficiently’, to be exact, but <em>sanely</em>. Without it, your CPU, memory, and devices would be a chaotic free-for-all. Think less ‘well organised city’ and more ‘toddlers fighting over toys’, where each one yanks the resources for its own benefit without caring about the collective commotion they are creating.</p>
<h4 id="heading-the-pieces-of-a-computer-system">The pieces of a computer system</h4>
<p>A computer isn’t just a shiny piece of hardware sitting on your desk looking expensive. It is a stack, and every layer depends on the one below it.</p>
<ul>
<li><p><strong>Hardware</strong>: CPU, memory, storage, I/O devices — the physical stuff.</p>
</li>
<li><p><strong>Operating System</strong>: The boss. It controls the hardware.</p>
</li>
<li><p><strong>Application Programs</strong>: Browsers, compilers, editors — these <em>use</em> what the OS provides.</p>
</li>
<li><p><strong>User</strong>: That’s you, clicking buttons and expecting miracles.</p>
</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*c17EVzhMn3KeN94_" alt class="image--center mx-auto" /></p>
<p>The OS sits in the middle, acting as a translator and referee. Applications don’t talk directly to hardware. That would be like letting every citizen directly control traffic lights. Instead, they go through the OS, which enforces rules and keeps things from catching fire.</p>
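<p>To make that concrete, here is a tiny sketch (my own example, not from the book): even a one-line file write in a high-level language like Java never touches the disk directly. The runtime asks the OS, and the kernel’s driver does the actual work.</p>

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OsMediationDemo {
    public static void main(String[] args) throws IOException {
        // This *looks* like writing to hardware, but the JVM never touches the disk.
        // Under the hood it becomes open/write/close system calls, and only the
        // kernel's driver code actually moves bytes to the device.
        Path file = Files.createTempFile("os-demo", ".txt");
        Files.writeString(file, "hello, kernel");

        System.out.println(Files.readString(file)); // hello, kernel
        Files.delete(file);
    }
}
```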
<h4 id="heading-user-view-vs-system-view-same-machine-different-reality">User View vs. System View: Same Machine, Different Reality</h4>
<p>From the user’s point of view, the computer is about convenience and speed. You want the apps to load fast, animations to feel smooth, and the system to ‘just work.’</p>
<p>From the system’s point of view, the OS has a much colder job. It is a resource allocator. CPU time, memory space, disk access, I/O devices: everything is limited, and everything wants attention now. When requests conflict, the OS decides who waits and who gets served.</p>
<p>Same machine, but completely different priorities.</p>
<p>This distinction matters even more when you are away from laptops and smartphones. Embedded systems like those in refrigerators or air conditioners barely have a ‘user view’ at all. In these cases, the OS exists almost entirely to manage resources quietly and reliably in the background.</p>
<h3 id="heading-the-core-roles-of-an-operating-system">The core roles of an Operating System</h3>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*v709vr4Pc9WLCb6A" alt class="image--center mx-auto" /></p>
<p>At its heart, the OS wears two main hats.</p>
<p>First, it’s a <strong>resource manager</strong>. A good analogy is a government. It doesn’t produce useful work itself, but it creates the conditions that allow others to function. CPU scheduling, memory allocation, device management. This is the OS deciding who gets what, when and for how long.</p>
<p>Second, it’s a <strong>control program</strong>. This is the safety officer. The OS manages how user programs execute, preventing errors, misuse, and chaos, especially when it comes to I/O devices. You don’t want random applications poking hardware directly.</p>
<h3 id="heading-what-is-an-operating-system-made-of">What is an Operating System made of?</h3>
<p><img src="https://cdn-images-1.medium.com/max/1600/1*SLuX0rxV2uHl3o36Xu_ffQ.png" alt class="image--center mx-auto" /></p>
<p>The OS isn’t a monolithic blob. It is a layered composition that lets modern systems scale and evolve without collapsing under their own complexity.</p>
<ol>
<li><p><strong>The Kernel</strong><br /> The kernel is the privileged core of the OS. It runs in supervisor mode (aka kernel mode), which means it can execute any CPU instruction and directly access hardware. It is responsible for:<br /> i. <strong><em>Process management</em></strong> — creating, scheduling and terminating processes and threads.<br /> ii. <strong><em>Device management</em></strong> — abstracting hardware via device drivers and handling interrupts.<br /> iii. <strong><em>System call handling</em></strong> — providing a controlled entry point for user-space programs to request privileged operations.</p>
</li>
<li><p><strong>System Programs</strong><br /> These run in user mode, not kernel mode, but they are tightly coupled to the OS. They provide essential services that make the system usable but don’t require direct hardware control. Examples include init systems (like systemd) that manage service startup, file system utilities, network configuration tools, logging daemons, shells and core command-line tools.</p>
</li>
<li><p><strong>Application Programs</strong><br /> Application programs are fully user-space software with no special privileges. They cannot directly access hardware, memory outside their own address space, or other processes. Key characteristics include:<br /> i. They run in isolated virtual address spaces.<br /> ii. They rely on OS abstractions like files, sockets, and processes.<br /> iii. They use libraries and system calls to interact with the OS.</p>
</li>
<li><p><strong>Middleware</strong><br /> Middleware exists to reduce friction between applications and the OS, especially in complex ecosystems like mobile and distributed systems. It often runs in user space but may communicate with kernel services or system daemons underneath. It provides:<br /> i. High-level APIs over low-level system calls<br /> ii. Shared services such as databases, media codecs, graphics pipelines and messaging<br /> iii. Runtime environments</p>
<p> On mobile systems, middleware is essential because apps are sandboxed aggressively. The OS exposes limited privileges, and middleware fills the gap with reusable, standardized services so every app doesn't reinvent the same machinery.</p>
</li>
</ol>
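<p>A small illustration of that boundary (a hypothetical Java sketch, not from the book): a user-space program can only <em>ask</em> the OS about the resources the kernel manages, through controlled APIs that bottom out in system calls.</p>

```java
public class KernelServicesDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        // Process management: the kernel schedules us onto these cores;
        // we can only ask how many there are, not grab one directly.
        System.out.println("CPU cores visible to this process: " + rt.availableProcessors());

        // Memory management: the heap the OS has granted this process so far.
        System.out.println("Heap currently reserved: " + rt.totalMemory() / (1024 * 1024) + " MiB");

        // Even the OS's identity reaches us through the same controlled interface.
        System.out.println("Running on: " + System.getProperty("os.name"));
    }
}
```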
<h4 id="heading-why-this-layering-matters"><strong>Why does this layering matter?</strong></h4>
<p>This layered architecture exists to manage complexity, safety and performance.</p>
<ul>
<li><p>Privilege separation prevents bugs in the application layer from corrupting the system.</p>
</li>
<li><p>Abstraction layers allow hardware and software to evolve independently.</p>
</li>
<li><p>Modularity keeps the operating system maintainable at scale.</p>
</li>
</ul>
<h3 id="heading-the-truth">The Truth</h3>
<p>An operating system does not compose emails, write code, or stream Twitch. It creates order. Every time your device does not crash with 20 Chrome tabs open, every time you can smoothly take a call while achieving a 20x multiplier on Call of Duty Mobile, it’s the OS doing its job.</p>
<h3 id="heading-bonus">Bonus</h3>
<p>Have you heard of Moore’s law? It is an observation made by Intel co-founder Gordon Moore in 1965.</p>
<p><img src="https://cdn-images-1.medium.com/max/1600/0*6ER-6bsQDSS2vk9H.jpg" alt="https://newsroom.intel.com/press-kit/moores-law" class="image--center mx-auto" /></p>
<p>Well, historically, operating systems have been shaped by hardware progress. Moore’s law is the observation that transistor counts on a microchip double roughly every 18 months to two years, leading to exponential growth in computing power, smaller devices, and lower costs. It is an empirical trend, not a physical law, and it faces physical limits, which drives innovation in new materials and packaging to sustain progress.</p>
<p>It has been pretty accurate up till now.</p>
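<p>If you take the 18-month doubling at face value, the growth factor after <em>t</em> years is 2<sup>t/1.5</sup>. A quick back-of-the-envelope calculation (my own arithmetic, not a figure from Intel):</p>

```java
public class MooresLawDemo {
    public static void main(String[] args) {
        // If transistor counts double every 18 months (1.5 years),
        // the growth factor after t years is 2^(t / 1.5).
        double years = 15;
        double factor = Math.pow(2, years / 1.5);
        // 15 years = 10 doublings = 1024x growth
        System.out.printf("Growth over %.0f years: %.0fx%n", years, factor);
    }
}
```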
]]></content:encoded></item><item><title><![CDATA[Java Is Not So Complicated]]></title><description><![CDATA[If you’ve written code in Python, JS, C++, or even Go, Java’s data structures will feel familiar, but a little stricter, a little louder, and a lot more organized. Imagine the thrill and control you felt when you switched from JavaScript to TypeScrip...]]></description><link>https://blogs.yasharyan.dev/java-is-not-so-complicated</link><guid isPermaLink="true">https://blogs.yasharyan.dev/java-is-not-so-complicated</guid><category><![CDATA[Java]]></category><category><![CDATA[java beginner]]></category><category><![CDATA[data structures]]></category><category><![CDATA[DSA]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Fri, 14 Nov 2025 18:44:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763059599679/e3027b31-b68c-4998-94b1-d6618361a955.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’ve written code in Python, JS, C++, or even Go, Java’s data structures will feel familiar, but a little stricter, a little louder, and a lot more organized. Imagine the thrill and control you felt when you switched from JavaScript to TypeScript. Now imagine the same, but nothing like it. There will be so much control in your hands that you will get overwhelmed with power.</p>
<p>Java has a whole <em>Collections Framework</em> dedicated to the art of storing and manipulating data. It’s one of the language’s biggest strengths, and also one of the first things beginners dread.</p>
<p>Let’s break it down clearly, with real-world use cases and code you’ll actually use in production.</p>
<h2 id="heading-collections-framework-the-big-umbrella"><strong>Collections Framework: The Big Umbrella</strong></h2>
<p>Java bundles its data structures into one giant library called the <strong>Collections Framework</strong>. It provides a set of interfaces, classes, and algorithms to store, retrieve, and manipulate groups of objects in a standardized way. Everything meaningful flows from a few core interfaces: List, Set, Queue, and the outlier Map. Think of it as Java’s built-in “starter pack” for organizing data.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763116851479/9e1cf846-68c9-4a6d-b55f-d34f06a5dcdf.jpeg" alt class="image--center mx-auto" /></p>
<p>You might be wondering why Map is not a sub-tree of the Collection tree. Well, Map is part of the Collections Framework, but it is <strong>not a subtype of the Collection</strong> interface. Yeah, Java did the slightly awkward thing on purpose.</p>
<p>But why?<br />A Map is a mapping from a key to a value: basically, a function. Making Map extend Collection would break fundamental rules of:</p>
<ul>
<li><p>set theory</p>
</li>
<li><p>type safety</p>
</li>
<li><p>logical meaning of operations like add, remove, contains, size, etc.</p>
</li>
</ul>
<p>Take, for example, <code>.add()</code> on a List or Set. Makes sense, right? You are adding more strings to a List of strings. But on a Map? What do you add? A key? A value? A key-value pair? The definition is ambiguous.</p>
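<p>You can see this in the API itself: Collection types get <code>add()</code>, while Map forces you to spell out both halves of the pair with <code>put(key, value)</code>. A tiny sketch:</p>

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AddVsPutDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        names.add("Pablo");            // unambiguous: one element goes in

        Map<String, Integer> ages = new HashMap<>();
        // There is no ages.add(...) -- a Map entry needs BOTH a key and a value,
        // so the API makes you say which is which:
        ages.put("Pablo", 44);

        System.out.println(names.size());      // 1
        System.out.println(ages.get("Pablo")); // 44
    }
}
```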
<h2 id="heading-1-sets-when-you-need-uniqueness"><strong>1. Sets → When You Need Uniqueness</strong></h2>
<p>A Set is a collection that stores a group of unique elements with <strong>no duplicates</strong>. That’s it. But that one constraint makes Sets extremely useful.</p>
<h3 id="heading-popular-implementations"><strong>Popular Implementations</strong></h3>
<ul>
<li><p><strong>HashSet</strong>: Fastest, no order maintained</p>
</li>
<li><p><strong>LinkedHashSet</strong>: Preserves insertion order</p>
</li>
<li><p><strong>TreeSet</strong>: Sorted, backed by Red-Black Tree</p>
</li>
</ul>
<pre><code class="lang-java">Set&lt;String&gt; users = <span class="hljs-keyword">new</span> HashSet&lt;&gt;();
users.add(<span class="hljs-string">"Pablo"</span>);
users.add(<span class="hljs-string">"Pablo"</span>);  <span class="hljs-comment">// ignored because it already exists</span>
users.add(<span class="hljs-string">"Escobar"</span>);

System.out.println(users);  <span class="hljs-comment">// [Pablo, Escobar] not necessarily in the same order</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763144422103/7f4686ec-5fbd-46de-bcb8-4816772b1c75.png" alt class="image--center mx-auto" /></p>
<p><em>Imagine a Java Set as the floors of an apartment building. Each floor is unique: there cannot be two 4th floors; there is only one floor per number.</em></p>
<h3 id="heading-industry-use-cases"><strong>Industry Use Cases</strong></h3>
<ul>
<li><p>De-duplicating records during ETL (Extract, Transform and Load)</p>
</li>
<li><p>Tracking visited nodes in graph problems</p>
</li>
<li><p>Caching unique API calls or events</p>
</li>
</ul>
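<p>As a sketch of the first use case (the record names are made up), de-duplicating is essentially one line with a Set; a LinkedHashSet also keeps the first-seen order:</p>

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DedupDemo {
    // Drop duplicate records while keeping first-seen order,
    // as you might in the "Transform" step of an ETL pipeline.
    static List<String> dedupe(List<String> records) {
        Set<String> seen = new LinkedHashSet<>(records);
        return List.copyOf(seen);
    }

    public static void main(String[] args) {
        List<String> raw = List.of("row-1", "row-2", "row-1", "row-3", "row-2");
        System.out.println(dedupe(raw)); // [row-1, row-2, row-3]
    }
}
```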
<h2 id="heading-2-lists-ordered-indexed-repeatable"><strong>2. Lists → Ordered, Indexed, Repeatable</strong></h2>
<p>Lists in Java are ordered collections that allow duplicate elements and provide indexed access to elements. This is the workhorse of Java development.</p>
<h3 id="heading-popular-implementations-1"><strong>Popular Implementations</strong></h3>
<ul>
<li><p><strong>ArrayList</strong>: dynamic array, fastest for reads</p>
</li>
<li><p><strong>LinkedList</strong>: optimized for insert-heavy workflows</p>
</li>
</ul>
<pre><code class="lang-java">List&lt;String&gt; products = <span class="hljs-keyword">new</span> ArrayList&lt;&gt;();
products.add(<span class="hljs-string">"iPhone"</span>);
products.add(<span class="hljs-string">"MacBook"</span>);
products.add(<span class="hljs-string">"MacBook"</span>); <span class="hljs-comment">// duplicates allowed</span>

System.out.println(products); <span class="hljs-comment">// [iPhone, MacBook, MacBook] in this exact order</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763144248687/dba531c4-e67c-479c-9aae-7f8fee37cee7.png" alt class="image--center mx-auto" /></p>
<p><em>Imagine a train as a Java List. Each bogie is not unique, but they still are in an ordered collection. The positioning matters.</em></p>
<h3 id="heading-industry-use-cases-1"><strong>Industry Use Cases</strong></h3>
<ul>
<li><p>Response mapping from APIs</p>
</li>
<li><p>Keeping ordered logs</p>
</li>
<li><p>Storing items in shopping carts, feeds, playlists</p>
</li>
</ul>
<h2 id="heading-3-queues-first-in-first-out-fifo-priority-based"><strong>3. Queues → First In, First Out (FIFO) or Priority-Based</strong></h2>
<p>A Queue is a linear data structure following the First In First Out (FIFO) principle, where elements are added (enqueued) at the rear and removed (dequeued) from the front.</p>
<h3 id="heading-implementations-you-actually-use"><strong>Implementations You Actually Use</strong></h3>
<ul>
<li><p><strong>ArrayDeque</strong>: The modern, efficient queue</p>
</li>
<li><p><strong>LinkedList as Queue</strong>: Works but not preferred</p>
</li>
<li><p><strong>PriorityQueue</strong>: Automatically sorts based on priority</p>
</li>
</ul>
<pre><code class="lang-java">Queue&lt;String&gt; tasks = <span class="hljs-keyword">new</span> ArrayDeque&lt;&gt;();
tasks.add(<span class="hljs-string">"processPayment"</span>);
tasks.add(<span class="hljs-string">"sendEmail"</span>);
tasks.add(<span class="hljs-string">"generateInvoice"</span>);

System.out.println(tasks.poll()); <span class="hljs-comment">// processPayment</span>
</code></pre>
<p><img src="https://images.unsplash.com/photo-1663786056091-fcb790594901?fm=jpg&amp;q=60&amp;w=3000&amp;ixlib=rb-4.1.0&amp;ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" alt="a group of people standing outside a building" /></p>
<p><em>People standing in a queue are the perfect example of a Java Queue. The first person in the line is the first person to get their ice cream and leave.</em></p>
<h3 id="heading-industry-use-cases-2"><strong>Industry Use Cases</strong></h3>
<ul>
<li><p>Task execution pipelines</p>
</li>
<li><p>Messaging queues</p>
</li>
<li><p>Throttling and scheduling</p>
</li>
</ul>
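<p>The PriorityQueue implementation mentioned above deserves a quick sketch of its own (the task names and priority numbers here are made up): it serves elements by priority rather than arrival order.</p>

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Queue;

public class PriorityTasksDemo {
    record Task(String name, int priority) {}

    public static void main(String[] args) {
        // The lowest priority number is served first, regardless of insertion order.
        Queue<Task> tasks = new PriorityQueue<>(Comparator.comparingInt(Task::priority));
        tasks.add(new Task("sendEmail", 3));
        tasks.add(new Task("processPayment", 1));
        tasks.add(new Task("generateInvoice", 2));

        System.out.println(tasks.poll().name()); // processPayment
        System.out.println(tasks.poll().name()); // generateInvoice
    }
}
```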
<h2 id="heading-4-maps-keyvalue-powerhouse"><strong>4. Maps → Key–Value Powerhouse</strong></h2>
<p>A Map is an object that maps keys to values, where each key is unique. It stores key-value pairs, allowing efficient retrieval, insertion, and deletion based on keys.</p>
<h3 id="heading-common-implementations"><strong>Common Implementations</strong></h3>
<ul>
<li><p>HashMap → Fastest, default choice</p>
</li>
<li><p>LinkedHashMap → Preserves insertion order</p>
</li>
<li><p>TreeMap → Sorted order by keys</p>
</li>
</ul>
<pre><code class="lang-java">Map&lt;String, Integer&gt; stock = <span class="hljs-keyword">new</span> HashMap&lt;&gt;();
stock.put(<span class="hljs-string">"apple"</span>, <span class="hljs-number">50</span>);
stock.put(<span class="hljs-string">"banana"</span>, <span class="hljs-number">20</span>);

System.out.println(stock.get(<span class="hljs-string">"apple"</span>)); <span class="hljs-comment">// 50</span>
</code></pre>
<p><img src="https://images.unsplash.com/photo-1524639064490-254e0a1db723?fm=jpg&amp;q=60&amp;w=3000&amp;ixlib=rb-4.1.0&amp;ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" alt="opened book on brown table" /></p>
<p><em>Imagine this dictionary to represent a Java map. There are unique words, or keys, in the entire dictionary that each have a definition, or values. There cannot be a duplicate word in the dictionary, however different words can have the same meaning.</em></p>
<h3 id="heading-industry-use-cases-3"><strong>Industry Use Cases</strong></h3>
<ul>
<li><p>JSON-like structures</p>
</li>
<li><p>Configurations</p>
</li>
<li><p>Database row mapping</p>
</li>
<li><p>Caching layers</p>
</li>
</ul>
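<p>As a minimal sketch of the caching use case (with a made-up <code>expensiveLookup</code> standing in for a real DB query or HTTP call), <code>computeIfAbsent</code> gives you a one-line cache:</p>

```java
import java.util.HashMap;
import java.util.Map;

public class CacheDemo {
    private final Map<String, String> cache = new HashMap<>();
    int misses = 0; // counts how often we actually did the slow work

    // Hypothetical expensive call (DB query, HTTP request, ...).
    private String expensiveLookup(String key) {
        misses++;
        return key.toUpperCase();
    }

    String get(String key) {
        // Compute on first access, reuse the stored value afterwards.
        return cache.computeIfAbsent(key, this::expensiveLookup);
    }

    public static void main(String[] args) {
        CacheDemo c = new CacheDemo();
        c.get("apple");
        c.get("apple"); // served from the map, no second lookup
        System.out.println(c.misses); // 1
    }
}
```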
<h2 id="heading-5-iterators-the-universal-cursor"><strong>5. Iterators → The Universal Cursor</strong></h2>
<p>Iterators let you move through any collection safely and generically. They are used to traverse or iterate through elements of a collection, one element at a time.</p>
<pre><code class="lang-java">Iterator&lt;String&gt; it = products.iterator();
<span class="hljs-keyword">while</span> (it.hasNext()) {
    System.out.println(it.next());
}
</code></pre>
<p><img src="https://images.unsplash.com/photo-1709158997589-c547225f86f3?fm=jpg&amp;q=60&amp;w=3000&amp;ixlib=rb-4.1.0&amp;ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" alt="a close up of a book on a table" /></p>
<p><em>Imagine Iterators to be a bookmark in a book. It represents a pointer that traverses elements sequentially without exposing the entire book.</em></p>
<h3 id="heading-industry-use-cases-4"><strong>Industry Use Cases</strong></h3>
<ul>
<li><p>Removing elements while iterating (safe deletion)</p>
</li>
<li><p>Generic data processing pipelines</p>
</li>
<li><p>Framework-level operations (Spring does this internally)</p>
</li>
</ul>
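<p>The first use case is worth a sketch because it is the classic beginner trap: calling <code>remove()</code> on the list inside a for-each loop typically throws <code>ConcurrentModificationException</code>, while removing through the iterator is safe.</p>

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class SafeRemovalDemo {
    public static void main(String[] args) {
        List<String> products = new ArrayList<>(List.of("iPhone", "MacBook", "iPad"));

        // it.remove() deletes the element last returned by it.next(),
        // keeping the iterator and the list in sync.
        Iterator<String> it = products.iterator();
        while (it.hasNext()) {
            if (it.next().startsWith("i")) {
                it.remove();
            }
        }
        System.out.println(products); // [MacBook]
    }
}
```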
<h2 id="heading-6-enhanced-for-loop-cleaner-simpler-iteration"><strong>6. Enhanced For-Loop → Cleaner, Simpler Iteration</strong></h2>
<p>A more readable version of iterator-based traversal, introduced with Java 5. If you are familiar with JavaScript, it roughly translates to <code>for (const product of products) {}</code>. This is the default pattern used across modern Java codebases.</p>
<pre><code class="lang-java"><span class="hljs-keyword">for</span> (String p : products) {
    System.out.println(p);
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763145068532/ba671a65-6775-413d-ba2f-b69ffd0dac7b.png" alt class="image--center mx-auto" /></p>
<p><em>Think of this as a conveyor belt with items moving past a person. It allows automatic access to each item in a collection without manual indexing.</em></p>
<h2 id="heading-7-foreach"><strong>7. forEach()</strong></h2>
<p>Java 8 introduced a functional-style way of iterating. Clean. Expressive. Modern. It was introduced to eliminate the boilerplate of iterators and index-based loops.</p>
<pre><code class="lang-java">products.forEach(p -&gt; System.out.println(p));
</code></pre>
<p><img src="https://img.freepik.com/free-photo/car-bodies-are-assembly-line-factory-production-cars-modern-automotive-industry-car-being-checked-before-being-painted-hightech-enterprise_645730-809.jpg?semt=ais_hybrid&amp;w=740&amp;q=80" alt="Car assembly line Images - Free Download on Freepik" /></p>
<p><em>Consider an assembly line in car production. Each worker performs the same action (a lambda) on every item that passes through the line.</em></p>
<h2 id="heading-8-lambdas-making-java-less-verbose"><strong>8. Lambdas → Making Java Less Verbose</strong></h2>
<p>Lambdas are compact functions that make iteration, filtering, and transformations painless. It is a concise way to represent an anonymous function. If you code in Java in 2025, you use lambdas. Period. Example with filtering:</p>
<pre><code class="lang-java">products.stream()
        .filter(p -&gt; p.startsWith(<span class="hljs-string">"M"</span>))
        .forEach(System.out::println);
</code></pre>
<p><img src="https://media.hswstatic.com/eyJidWNrZXQiOiJjb250ZW50Lmhzd3N0YXRpYy5jb20iLCJrZXkiOiJnaWZcL3JvYm90aWMtdmFjdXVtLmpwZyIsImVkaXRzIjp7InJlc2l6ZSI6eyJ3aWR0aCI6ODI4fX19" alt="How Robotic Vacuums Work | HowStuffWorks" /></p>
<p><em>This can be considered a mini robot with programmable instructions. It represents anonymous reusable functions that can be passed around flexibly.</em></p>
<h3 id="heading-industry-use-cases-5"><strong>Industry Use Cases</strong></h3>
<ul>
<li><p>Spring Boot controllers</p>
</li>
<li><p>Streams API</p>
</li>
<li><p>Android apps</p>
</li>
<li><p>Reactive programming</p>
</li>
</ul>
<h1 id="heading-final-thoughts"><strong>Final Thoughts</strong></h1>
<p>Java’s data structures look intimidating at first, but once you understand the Collections Framework, everything snaps into place. Whether you’re a fresher learning your first big language or a developer jumping over from Python/JS, these structures will be part of every backend service or Android app you code.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763143630306/331e3dd4-3763-4b7c-8dea-cf7da2c486e3.jpeg" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Why is AI Leaving the Cloud and Moving into your Pocket]]></title><description><![CDATA[Introduction
Your digital devices have never been as powerful as they are today. Take, for instance, Apple silicon, like the M5 and its predecessors. These chips offer performance that rivals some of the most powerful processors from Intel and AMD.

The M1 ...]]></description><link>https://blogs.yasharyan.dev/ai-on-device</link><guid isPermaLink="true">https://blogs.yasharyan.dev/ai-on-device</guid><category><![CDATA[AI]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Apple]]></category><category><![CDATA[technology]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[Google]]></category><category><![CDATA[on-device ai]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Tue, 28 Oct 2025 19:37:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761672664269/4c02a6b2-6fc8-49a1-8f65-8bb86f0b2a28.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Your digital devices have never been as powerful as they are today. Take, for instance, Apple silicon, like the M5 and its predecessors. These chips offer performance that rivals some of the most powerful processors from Intel and AMD.</p>
<p><img src="https://cdn.mos.cms.futurecdn.net/Fudiia9dAiNt5cGSDLMqKm.jpg" alt="Google Pixel 6 Tensor chip — what it is and why it's a big deal | Tom's  Guide" /></p>
<p>The M1 chip, introduced in 2020 with the Mac, could easily run small models like 3B-parameter Llama variants or Stable Diffusion 1.5 smoothly without making your processor go crazy. The A15 Bionic chip that came with the iPhone 13 in 2021 was capable enough to run models of up to 2 billion parameters comfortably. The Tensor G3, introduced with the Pixel 8 in 2023, was made to run Gemini Nano on device. Similarly, the Microsoft SQ3, launched with the Surface Pro 9 in 2022, is geared for on-device AI tasks.</p>
<p>The growing power of these devices lays the groundwork for understanding why on-device AI processing is becoming increasingly prevalent. As these processors become more powerful and efficient, they enable new capabilities within the constraints of modern devices, making it feasible to run complex AI models directly on the device rather than relying on cloud resources.</p>
<h2 id="heading-the-rise-of-on-device-ai">The Rise of On-Device AI</h2>
<p>The rise of on-device AI processing is driven by a convergence of technological advancements and market pressures. Technologically, the increased computational power of modern devices has rendered local data processing feasible for increasingly complex tasks. This shift in capability empowers developers to offload compute-intensive tasks directly to the user’s device, thereby reducing reliance on cloud resources.</p>
<p>Market pressure has also played a significant role. User privacy concerns have surged due to high-profile breaches and regulatory changes like GDPR and CCPA. Companies are now under more scrutiny to protect user data and demonstrate compliance with privacy regulations. When LLMs process data on-device, it stays local, and companies can mitigate the risks associated with data exposure. This helps them adhere to regulations and increase user privacy.</p>
<p><img src="https://wallpaperswide.com/download/ai_processor_hardware_artificial_intelligence_circuits_evolution_technology-wallpaper-3840x2160.jpg" alt="AI Processor, Hardware, Artificial Intelligence, Circuits, Evolution,  Technology 4K UHD Wallpaper for UltraHD Desktop and TV : Widescreen and  UltraWide Display : Dual Monitor : Smartphone and Tablet Devices" /></p>
<p>Moreover, economic considerations push the trend towards on-device AI. Cloud computing, although powerful, incurs substantial costs for both infrastructure maintenance and data transfer. Companies are continually seeking ways to optimize these expenses. By pushing these models to the user’s device, companies can reduce cloud computing expenditures while benefiting from enhanced performance and privacy assurances.</p>
<h2 id="heading-milliseconds-matter">Milliseconds Matter</h2>
<p>One of the reasons on-device AI processing has been successful is the latency it helps eliminate. The time it takes for an AI model to process data can significantly impact user experience, particularly in real-time applications such as voice assistants or augmented reality (AR) tools. On-device processing excels at reducing this latency by eliminating the need to transmit data to remote servers. This reduction in latency can be substantial—measured in milliseconds rather than seconds—which translates to a more responsive and seamless user experience.</p>
<p>This is not something new that is just being introduced. In fact, autonomous vehicles have been doing it for some time now. Driving decisions need to be swift. Your life cannot be put in the hands of an unreliable network and congested cloud models. Manufacturers started implementing chips that could make split-second decisions on the vehicle itself.</p>
<p><img src="https://sloanreview.mit.edu/wp-content/uploads/2017/05/AI-Self-Driving-Cars-Vehicles-1200-1200x630.jpg" alt="Four Management Lessons From Self-Driving Cars" /></p>
<p>But what about your laptops and phones? Why do you need split-second decisions there? Well, you don’t. But that does not mean you do not need local models running on your phone. Tasks like speech-to-text, call transcription, keyboard autocorrect and building spatial maps for AR are already being done on your smartphone.</p>
<h2 id="heading-why-is-it-good-news-for-developers">Why is It Good News for Developers?</h2>
<p>The rise of on-device AI brings significant advantages for developers, who are increasingly looking to create robust and efficient applications while balancing cost and performance considerations. One of the foremost benefits lies in the economics of building applications. Imagine incurring thousands of dollars for general-purpose AI tasks like translation or speech. Now imagine your customer’s device doing all that for you, for free. You do not have to incur the costs of maintaining cloud resources.</p>
<p><img src="https://static.east-tec.com/images/b/blog/2024/artificial-intelligence-privacy/artificial-intelligence-privacy.png" alt="AI and privacy: Is your data safe in a tech-focused era?" /></p>
<p>Additionally, on-device AI significantly enhances privacy for users, aligning closely with growing regulatory requirements such as GDPR and CCPA.</p>
<p>Performance-wise, local processing delivers lower latency and higher reliability. Developers can create applications that respond almost instantaneously to user inputs, enhancing the overall experience by avoiding network delays. Moreover, on-device AI ensures that services remain functional even in areas with poor or no internet connectivity, making applications more reliable and resilient.</p>
<p>In summary, the shift towards on-device AI offers developers a triple win:</p>
<ol>
<li><p>Enhanced security and privacy,</p>
</li>
<li><p>Improved performance through reduced latency</p>
</li>
<li><p>Cost savings that allow for greater innovation and efficiency in app development.</p>
</li>
</ol>
<p>Apple has already released <a target="_blank" href="https://developer.apple.com/apple-intelligence/">on-device APIs for Swift</a> so developers can leverage these in their apps, Google has started rolling out access to <a target="_blank" href="https://developer.chrome.com/docs/ai?gad_source=1&amp;gad_campaignid=22378630025&amp;gbraid=0AAAAAC1d8f4krIXR3sFjAmTjbIjbwWVQG&amp;gclid=CjwKCAjw04HIBhB8EiwA8jGNbXFTiTrkb_mHRLmg-BF1W9HCxkd6aF1X8bBui4RTJ0LQEPZVV2YAaBoCQv0QAvD_BwE">on-browser Gemini Nano for Chrome extensions</a> in preview, and Microsoft has started providing <a target="_blank" href="https://blogs.windows.com/windowsdeveloper/2024/05/21/unlock-a-new-era-of-innovation-with-windows-copilot-runtime-and-copilot-pcs/">OS level APIs via Windows Copilot Runtime</a>.</p>
<h2 id="heading-hybrid-ai-what-is-it">Hybrid AI - What is it?</h2>
<p>On-device AI sounds almost too good to be true, and it largely is true, but what is a world without variation? Not everyone uses the latest Surface Laptop or the latest iPhone. People are still using decade-old laptops and phones, and you cannot ignore them when building your app. Even if their device cannot run the latest 3B-parameter Llama model, you still need to give them the feature to translate Gen Z gibberish into formal English (if your app does that). So what do you do? You mix both on-device and cloud models: if a device cannot process speech translation on-device, you make an API call to the cloud to get it done.</p>
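<p>Here is a minimal sketch of that fallback pattern. The <code>localTranslate</code> and <code>cloudTranslate</code> functions are hypothetical stand-ins for whatever on-device API and cloud endpoint your app actually uses:</p>

```javascript
// Hybrid AI fallback sketch: prefer the on-device model, and fall back to
// the cloud when the device cannot handle the task. Both translate
// functions are hypothetical placeholders for your real implementations.
async function translateWithFallback(text, localTranslate, cloudTranslate) {
  try {
    // On-device path: free, private and low-latency when available.
    return { text: await localTranslate(text), source: "on-device" };
  } catch (err) {
    // Device too old, or no local model available: use the cloud API.
    return { text: await cloudTranslate(text), source: "cloud" };
  }
}
```

<p>The same shape works for any capability: probe the device first, and route to the cloud only when the local path throws or is unsupported.</p>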
<p>Another use case is when you have a very specialized model that you have developed and trained yourself, one that is too heavy to be deployed on a user’s device.</p>
<p>Let’s take an example.</p>
<p>You made a model that takes an excerpt from a message written in any language and transforms it into a speech by Putin.</p>
<p><img src="https://media1.tenor.com/m/wR6pqhHkDcgAAAAd/putin.gif" alt="a man in a suit and tie stands in front of a clock tower at night" class="image--center mx-auto" /></p>
<p>A good blend here would be: translate the text from any language to Russian using on-device models, without exposing any of your messages to the internet (well, you still are, just in Russian); send the translated text to your server and feed it to your model; and finally let the TTS model convert it into a speech by Putin and send the audio file back to the requesting device.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761659544052/2f78ab16-4745-4b6f-ac52-68dc11156fb9.png" alt class="image--center mx-auto" /></p>
<p>This approach integrates both remote servers and local hardware to optimize performance, reliability, and privacy for users. The rationale behind hybrid architectures lies in leveraging the strengths of each system: the scalability and vast computational resources of the cloud, alongside the speed and security offered by processing data within a user’s device.</p>
<p>In essence, Hybrid AI combines model training on high-powered servers in the cloud with real-time inference processed locally on-device. This ensures that complex models can benefit from the extensive compute capabilities of cloud environments while delivering immediate results without latency or dependency on network connectivity. This synergy between cloud-based resources and on-device processing allows for efficient management of computational demands while safeguarding user privacy.</p>
<h2 id="heading-how-to-decide-when-to-run-ai-locally-vs-in-the-cloud">How to Decide When to Run AI Locally vs in the Cloud</h2>
<p>Determining whether to perform AI processing locally or in the cloud involves a nuanced evaluation of several factors, including data sensitivity, computational demand, latency requirements, and cost considerations. Each approach offers unique benefits and comes with specific challenges that must be weighed carefully.</p>
<ul>
<li><p><strong>Data Sensitivity</strong>: One of the most critical considerations is the sensitivity of the data being processed. For applications handling highly personal or confidential information, such as medical records or financial data, on-device processing is often preferable. By keeping the data localized, it never leaves the user's device, thereby enhancing privacy and security. This approach is particularly important in scenarios where regulatory compliance with data protection laws like GDPR or CCPA is paramount.</p>
</li>
<li><p><strong>Computational Demand</strong>: The complexity of the AI tasks being performed also plays a significant role in decision-making. For tasks that require real-time processing and immediate responses, such as voice recognition or augmented reality, on-device processing can deliver superior performance due to its lower latency. On the other hand, tasks with higher computational demands may benefit from cloud-based processing.</p>
</li>
<li><p><strong>Latency Requirements:</strong> Latency is a critical factor in determining the most appropriate processing location. Applications that require quick responses (such as virtual assistants, gaming, or real-time monitoring systems) are best served by local AI processing due to its minimal latency and independence from network connectivity.</p>
</li>
<li><p><strong>Cost Considerations</strong>: Financial implications also play a pivotal role in this decision-making process. On-device processing can lead to significant cost savings by reducing reliance on cloud resources and minimizing data transfer charges. Conversely, cloud-based AI processing often incurs recurring costs for server maintenance, data storage, and network usage but benefits from economies of scale.</p>
</li>
<li><p><strong>Scalability Needs</strong>: Another aspect is the scalability required for the application. For applications with fluctuating computational needs or expanding user bases, a hybrid approach might be ideal, combining local inferencing for quick responses with cloud-based processing to handle peak loads or specialized tasks. This flexibility allows for dynamic resource allocation and optimal performance across different usage scenarios.</p>
</li>
<li><p><strong>Security Concerns</strong>: The security of data in transit and at rest is another crucial consideration. Local processing reduces the risk of data breaches by minimizing exposure over public networks. Cloud-based solutions, while often providing strong network-level security, may require careful configuration to ensure that sensitive data remains secure throughout its lifecycle.</p>
</li>
<li><p><strong>Developer Resources</strong>: The availability and expertise of developers with the necessary skills to manage local versus cloud-based AI processing is another factor. Developers proficient in on-device AI need detailed knowledge of hardware capabilities and optimization techniques to ensure efficient performance. In contrast, cloud-based AI follows a more generalized, one-size-fits-all approach.</p>
</li>
</ul>
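<p>The checklist above can be condensed into a rough routing rule. This is only an illustrative sketch; the factor names and thresholds below are assumptions you would tune for your own application:</p>

```javascript
// Rough routing sketch for the decision factors above. The field names and
// the 100 ms latency threshold are illustrative assumptions, not a standard.
function chooseProcessingLocation(task) {
  // Sensitive data (medical, financial) should stay on the device.
  if (task.handlesSensitiveData) return "on-device";
  // Models too heavy for typical consumer hardware must run in the cloud.
  if (task.modelTooLargeForDevice) return "cloud";
  // Latency-critical work (voice, AR, real-time monitoring) stays local.
  if (task.maxAcceptableLatencyMs !== undefined && task.maxAcceptableLatencyMs < 100) {
    return "on-device";
  }
  // Otherwise mix both: local inference with cloud capacity for peak loads.
  return "hybrid";
}
```

<p>In practice the rule would also consult runtime signals (battery level, available memory, network state) rather than static flags, but the priority ordering stays the same.</p>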
<h2 id="heading-companies-that-are-already-switching-to-on-device">Companies that are already switching to on-device</h2>
<h3 id="heading-the-apple-use-case">The Apple Use Case</h3>
<p>People keep complaining about how bad Apple’s AI is at processing images in comparison to Samsung and Google. What people fail to understand is that most of the processing is being done locally on the smartphone. Apple is playing the long game. While other companies are burning through their resources providing free access to these hefty and expensive models, Apple has put the power right into their customers’ devices. Is it good for brand image? No. Is it getting them publicity? Yes, though mostly negative. Is it burning a huge hole in their pocket? Definitely not!</p>
<p><img src="https://helios-i.mashable.com/imagery/articles/023oxeinwmkPVKcWUKf9UYq/hero-image.fill.size_1200x675.v1730181621.png" alt="What is Apple Intelligence? | Mashable" /></p>
<p>On iPhones and Macs, Apple employs on-device machine learning to power features such as Live Text, which identifies text in images for actions like translating languages or making phone calls. The M-series and A-series chips enable instantaneous processing of this data locally, keeping all operations within the device itself without compromising user privacy. In 2025, Apple launched the AirPods Pro 3 with a chip strong enough to translate conversations in real time.</p>
<p>Moreover, Apple's emphasis on user privacy is a cornerstone of their strategy. By keeping AI processing within the device, Apple reduces the risk of data breaches and complies with stringent regulatory requirements, such as GDPR. This approach not only enhances trust with users but also aligns with broader industry trends toward enhanced data security and compliance.</p>
<p>Read more here: <a target="_blank" href="https://developer.apple.com/apple-intelligence/">https://developer.apple.com/apple-intelligence/</a></p>
<h3 id="heading-the-chrome-use-case">The Chrome Use Case</h3>
<p>Google’s Chrome browser has recently made significant strides in integrating on-device AI to enhance functionalities like text transformation. This move towards local processing underscores Google's commitment to improving user experience while addressing privacy concerns and optimizing performance.</p>
<p>Chrome offers features like content writing, proofreading, translation and summarisation, among other use cases.</p>
<p><img src="https://the-decoder.com/wp-content/uploads/2024/09/Chrome-Supercharged-with-AI-Teaser.jpg" alt="Google adds Gemini AI upgrades to Chrome" /></p>
<p>These advancements are still in preview (as of October 2025), but developers can easily sign up to get access to the Gemini Nano model deployed on the latest Chrome versions. By keeping data processing confined to local hardware, Chrome effectively mitigates risks associated with transmitting sensitive data over networks.</p>
<p>In summary, Google's integration of on-device AI in Chrome exemplifies a strategic approach that prioritizes both performance and privacy. By executing complex tasks locally, Chrome offers users faster, more reliable, and secure functionalities, setting a trend for other browser platforms to follow.</p>
<p>Read more here: <a target="_blank" href="https://developer.chrome.com/docs/ai?gad_source=1&amp;gad_campaignid=22378630025&amp;gbraid=0AAAAAC1d8f4krIXR3sFjAmTjbIjbwWVQG&amp;gclid=CjwKCAjw04HIBhB8EiwA8jGNbXFTiTrkb_mHRLmg-BF1W9HCxkd6aF1X8bBui4RTJ0LQEPZVV2YAaBoCQv0QAvD_BwE">https://developer.chrome.com/docs/ai/</a></p>
<h3 id="heading-the-microsoft-copilot-use-case">The Microsoft Copilot Use Case</h3>
<p>Microsoft is taking on-device AI to a whole new level by providing a universal layer within the Windows OS. By embedding AI as a fundamental OS-level capability, even code written in C or Java can access it, giving developers broader, more systemic access to AI capabilities.</p>
<p><img src="https://i0.wp.com/robquickenden.blog/wp-content/uploads/2024/05/Screenshot_20240520_191126_Gallery-scaled.jpg?resize=2000%2C1200&amp;ssl=1" alt="Copilot+PC – Fastest, most AI-ready Windows PCs ever built. – Modern Work  and AI Blog" /></p>
<p>These changes arrive alongside a new class of powerful, next-generation AI devices, an invitation to app developers to deliver differentiated AI experiences that run on the device itself. Microsoft is calling these devices Copilot+ PCs.</p>
<p>Read more here: <a target="_blank" href="https://blogs.windows.com/windowsdeveloper/2024/05/21/unlock-a-new-era-of-innovation-with-windows-copilot-runtime-and-copilot-pcs/">https://blogs.windows.com/windowsdeveloper/</a></p>
<h2 id="heading-future-perspectives-whats-next-in-on-device-ai">Future Perspectives - What’s Next in On-Device AI?</h2>
<p>The future of on-device AI is poised for remarkable advancements driven by ongoing technological innovation and emerging trends. One significant area of development is more robust hardware, enabling ever more powerful AI processors to be integrated into small devices. Innovations like Apple’s M-series chips, Google’s Tensor chips, and Qualcomm’s Snapdragon X series illustrate this trend, with ever-smaller yet increasingly capable neural engines becoming standard in mobile devices.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761673267207/cc3a3173-0519-49cd-ad34-f1db25060539.jpeg" alt class="image--center mx-auto" /></p>
<p>Improvements in energy efficiency are crucial for sustaining on-device AI. Current research focuses on developing AI models that require less computational power while maintaining high accuracy. Techniques such as model pruning and quantization optimize neural networks, allowing them to run efficiently even with limited resources; this is why nano models of 3 billion or fewer parameters can run on most personal devices. This progress will further enhance the feasibility of executing complex AI tasks locally without significant battery drain or overheating.</p>
<p>Another promising direction is the advancement in edge computing, which extends the capabilities of on-device AI by enabling real-time data processing and decision-making at the network’s edge. This can be particularly beneficial for applications involving IoT (Internet of Things) devices, where instantaneous responses are essential. Collaborative efforts between technology companies and academic institutions will likely drive forward innovations that blend edge computing with advanced AI algorithms.</p>
<p>All this sounds good for users, given the privacy gains. But don’t forget that companies also benefit from these advancements: the more processing they offload to your device, the lighter it is on their pockets. Privacy is an added bonus.</p>
<p>In summary, the future of on-device AI is bright, driven by ongoing research in hardware miniaturization, energy-efficient processing, edge computing integration, privacy-preserving techniques, and quantisation of models. These innovations promise to make on-device AI even more pervasive and effective, enabling a new wave of intelligent applications across diverse domains.</p>
<hr />
<p>If you liked this, do show support by liking this article and <a target="_blank" href="https://www.yasharyan.dev/page/blogs">subscribing for future updates</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761676756304/aeb2fd0f-a086-4e25-a70d-6ed5028f622d.jpeg" alt class="image--center mx-auto" /></p>
<p>Check out my portfolio at <a target="_blank" href="https://yasharyan.dev/">yasharyan.dev</a></p>
]]></content:encoded></item><item><title><![CDATA[Fraud prevention on banking websites using DRM]]></title><description><![CDATA[Introduction
Banking websites are among the most secure systems due to their desire for security and because many reserve banks enforce strict security guidelines. Banks adhered to this approach rigorously. Every new feature underwent thorough vettin...]]></description><link>https://blogs.yasharyan.dev/fraud-prevention-on-banking-websites-using-drm</link><guid isPermaLink="true">https://blogs.yasharyan.dev/fraud-prevention-on-banking-websites-using-drm</guid><category><![CDATA[Digital Rights Management]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[fraud prevention]]></category><category><![CDATA[banking technology]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Sun, 27 Oct 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730168538000/c72badc3-264a-4c67-bc2f-534c62881dd4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Banking websites are among the most secure systems on the web, both because security is central to a bank’s business and because many reserve banks enforce strict security guidelines. Banks adhere to this approach rigorously: in India, every new feature undergoes thorough vetting by the Reserve Bank of India to ensure compliance with stringent technological standards. This commitment to rigorous guidelines strengthens security frameworks, builds customer trust, and establishes robust defences against emerging cyber threats. However, despite these steps, scammers still find ways to fool users into divulging information about their bank accounts.</p>
<h2 id="heading-need-for-a-solution">Need for a solution</h2>
<p>One such issue was identified during my tenure at my previous company (a major Indian bank), where hostile parties scammed users into sharing their screens while claiming to be representatives of the bank. This enabled them to trick users into performing transactions that drained their bank balances.</p>
<p>Our first instinct was to add a black overlay on the website when screen sharing is detected. However, from a website level, there is no way to prevent or detect if the screen is being shared. So, this method was of no use. The next approach was to look into Netflix’s system, where you can’t share the playing video over Google Meet or Zoom.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730169947883/c1291ec0-3c90-4a06-bc92-72a6ed192893.jpeg" alt class="image--center mx-auto" /></p>
<ol>
<li><p>Netflix is a website running on one of the tabs of your browser</p>
</li>
<li><p>It has no control over screen-sharing activities</p>
</li>
<li><p>It is a regular website, not privileged software owned by the operating system. </p>
</li>
<li><p>No desktop client is running for the application.</p>
</li>
</ol>
<p>All of these points applied equally to the bank’s website. So how does Netflix do it?</p>
<hr />
<h2 id="heading-digital-rights-management">Digital Rights Management</h2>
<p>I started researching Digital Rights Management (DRM) to implement a solution on the website. DRM is a technology used by Netflix that helps it protect its video content from screen sharing. What is DRM, you ask?</p>
<blockquote>
<p>Digital Rights Management (DRM) is a set of technologies and policies used to control access to digital content, ensuring that only authorized users can use or distribute it. DRM often includes encryption, licensing, and other access control measures to protect digital media (like videos, music, images, software, and e-books) from unauthorized copying, sharing, or modification. It’s widely used by content creators, publishers, and service providers to safeguard intellectual property and monetize digital content.</p>
</blockquote>
<p>In very simple terms, DRM is an umbrella of technologies and rules that protect digital content from being copied, shared, or accessed without permission.</p>
<h3 id="heading-lets-understand-by-use-cases"><strong>Let’s understand by use cases:</strong></h3>
<p><strong>Videos:</strong> As experienced on Netflix, videos cannot be downloaded, screen-recorded or projected without authorization; such attempts lead to a black screen.</p>
<p><strong>Images:</strong> DRM protection can prevent downloading, copying or printing in high quality (yes, all of these remain possible, but the resolution and quality will suffer). Watermarks are usually added to the content. </p>
<p><strong>Audio:</strong> DRM-protected audio files can only be played by authorized devices or applications. This also prevents downloading, recording or sharing of content via screen- or audio-sharing services. </p>
<p><strong>Software:</strong> DRM-protected software will only install for authorized devices and/or users, tying the application to specific hardware or an online account.</p>
<h2 id="heading-how-does-drm-work">How does DRM work?</h2>
<p>Whenever you want to enable DRM protection for digital content, you don’t simply turn it on with some library; there are many components that need to be dealt with. However, as sophisticated as the technology is, it is fairly simple to implement. </p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*k2uhX6InGDiGckUxvZiAdA.jpeg" alt /></p>
<h3 id="heading-content-production-and-conversion"><strong>Content Production and Conversion</strong>:</h3>
<ul>
<li><em>Content Server</em>: The content producer or owner converts the original content into a DRM-compatible format. Once protected, the content is sent to a distributor or a Content Delivery Network (CDN) for distribution.</li>
</ul>
<h3 id="heading-key-licensing"><strong>Key Licensing</strong>:</h3>
<ul>
<li><p><em>Key Server</em>: This is the central part of DRM, handling the secure storage and distribution of the decryption keys that allow authorized access to protected content. Major DRM licensing technologies are provided by big tech companies and are typically tailored to different ecosystems, devices, and operating systems. Key DRM systems include:</p>
<ul>
<li><p><a target="_blank" href="https://developer.apple.com/documentation/fairplaystreaming">Apple FairPlay</a> (for Apple devices)</p>
</li>
<li><p><a target="_blank" href="https://www.widevine.com/">Google Widevine</a> (for Chrome and Android devices/applications)</p>
</li>
<li><p><a target="_blank" href="https://www.microsoft.com/playready/">Microsoft’s PlayReady</a> (for Xbox, Windows applications and some Smart TVs)</p>
</li>
<li><p><a target="_blank" href="https://www.adobe.com/primetime.html">Adobe Primetime</a> (used in broadcasting)</p>
</li>
<li><p><a target="_blank" href="http://www.marlin-community.com/">Marlin</a> (open-standard DRM developed by a consortium that includes companies like Sony, Philips, and Samsung).</p>
</li>
</ul>
</li>
</ul>
<p>    Each DRM solution is typically optimized for a specific range of devices and applications, focusing on secure content playback within that ecosystem. </p>
<h3 id="heading-content-distribution"><strong>Content Distribution</strong>:</h3>
<ul>
<li><em>Protected Content Server</em>: The protected content, now DRM-encoded, is stored on a distribution server. When users request content, they receive a version that is DRM-protected and requires a license for access.</li>
</ul>
<h3 id="heading-user-access"><strong>User Access</strong></h3>
<ul>
<li><p><em>DRM Client:</em> The consumer uses a DRM client (such as a specific app or media player) to request and receive DRM-protected content. The client also communicates with the licensing service to obtain the necessary decryption key. Only authorized users with the correct license can access and view the protected content.</p>
</li>
<li><p><em>Internet as the Medium:</em> All these interactions occur over the internet, connecting the content server, key server, protected content server, and the user’s DRM client to enable secure access to digital media.</p>
</li>
</ul>
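<p>To make the flow concrete, here is a toy sketch of the three parties above. Real DRM systems (Widevine, FairPlay, PlayReady) use hardware-backed encryption and signed license exchanges; the XOR cipher below is only a stand-in to show the shape of the flow:</p>

```javascript
// Toy sketch of the DRM flow: the key server hands out the decryption key
// only to authorized clients, and the content server only ever serves
// encrypted bytes. The XOR "cipher" is a stand-in for real encryption.
const KEY = 42;

// Content server side: the stored content is always encrypted.
const encrypt = (text, key) =>
  [...text].map(c => String.fromCharCode(c.charCodeAt(0) ^ key)).join("");

// Key server side: check the license before releasing the key.
function requestLicense(user) {
  if (!user.hasValidLicense) throw new Error("license denied");
  return KEY;
}

// DRM client side: fetch the key, then decrypt locally for playback.
function play(encryptedContent, user) {
  const key = requestLicense(user);
  return encrypt(encryptedContent, key); // XOR is its own inverse
}
```

<p>The important property is the separation of duties: the content server never holds the key, and the key server never hands it out without checking the license.</p>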
<h2 id="heading-fraud-prevention-using-drm">Fraud prevention using DRM</h2>
<p>By applying DRM, the aim was to restrict the visibility of sensitive information during screen-sharing sessions. This would help prevent scammers from viewing confidential data, even if users unknowingly shared their screens. How do we achieve this? There are a few key things that need to be understood before going in on the approach:</p>
<ul>
<li><p>Applying protection to the entire website is not possible. Certain areas need to be identified that require protection. In a banking context, this can be the area where you input your OTP, the area where your balances are visible, and so on.</p>
</li>
<li><p>It will not affect any images or colored UI elements</p>
</li>
<li><p>It can be costly to implement depending on the amount of traffic your website receives, so weigh the pros and cons before implementing it. For the bank’s application, the amount we spent on this implementation was far cheaper than what the frauds were costing us. </p>
</li>
<li><p>It can lead to slower load times.</p>
</li>
<li><p>DRM is usually compatible with most modern devices, but older systems may be affected. </p>
</li>
</ul>
<h3 id="heading-implementation">Implementation</h3>
<p>Here, a DRM-protected video is added to the background of the webpage (or a section of it) and the content is placed over it. When the screen is shared, the underlying video turns black, and the content on top of it, which should be set to black, is camouflaged against it. Let’s understand this through examples.</p>
<h3 id="heading-case-1-drm-video-localized-to-a-section-of-the-website"><strong>Case 1: DRM Video localized to a section of the website</strong></h3>
<p><img src="https://cdn-images-1.medium.com/max/800/1*eeS5y3Iov8XJIOpAsNtoxw.png" alt="Case 1: DRM protected video is localized to a particular section of the website (as seen by the user)" /></p>
<p>In this illustration, the DRM-protected video is a solid colour matching the background colour of the rest of the website, and it is localized to the background of the input field. The input field’s background is set to transparent so the video shows through, and the text colour is set to black. The video itself is a series of identical frames played on a loop, each the same colour as the webpage’s original background, so it blends in with the website. </p>
<p>What happens when the screen is shared? The localized section of the page is camouflaged and not visible over the screen share. When the user interacts with the input fields, the scammer cannot see clearly what the user is doing and cannot misguide them.</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*HH-3x-YRKXHfS-66HytlaA.png" alt="Case 1: DRM protected video is localized to a particular section of the website (as seen over screen share)" /></p>
<h3 id="heading-case-2-drm-video-on-the-complete-website"><strong>Case 2: DRM video on the complete website</strong></h3>
<p><img src="https://cdn-images-1.medium.com/max/800/1*ELohkRbjrJ6suBosCUzozw.png" alt="Case 2: DRM protected video is set as the background of the website (as seen by the user)" /></p>
<p>In this illustration, the DRM-protected video spans the entire website and is placed as its background. The point to understand here is that the entire website cannot be made transparent; there are bound to be a few components that are coloured differently or have a separate background colour. The DRM video cannot help with those, but then again, the point of the implementation is only to protect sensitive content like the balance and account number. In this case, the shared screen will look like this:</p>
<p><img src="https://cdn-images-1.medium.com/max/800/1*yuC_te9TpA1h3yY5LhdPzQ.png" alt="Case 2: DRM protected video is set as the background of the website (as seen over screen share)" /></p>
<p>As in Case 1, the camouflaged sections are not visible over the screen share, so the scammer cannot see what the user is interacting with and cannot misguide them.</p>
<h2 id="heading-technological-implementation"><strong>Technological Implementation:</strong></h2>
<ol>
<li><p><strong>Creation of DRM-protected video</strong></p>
<ul>
<li><p>A video is generated as a series of identical frames of the colour the website requires, serving as a dynamic background for the webpage. For example, if a webpage has a grey background, a video containing only a grey background is generated.</p>
</li>
<li><p>This video is encrypted using DRM systems like Widevine, PlayReady or FairPlay based on the target system.</p>
</li>
</ul>
</li>
<li><p><strong>DRM Player configuration</strong></p>
<p> A DRM-capable player such as Shaka Player, VdoCipher or Bitmovin is selected to play the protected video, and the following configuration is applied:</p>
<ul>
<li><p>The video plays on a loop.</p>
</li>
<li><p>Video control buttons (play/pause, forward, backward, and the progress bar) are hidden.</p>
</li>
<li><p>Caption controls are hidden.</p>
</li>
<li><p>Volume controls are hidden.</p>
</li>
<li><p>Keyboard shortcuts are disabled.</p>
</li>
<li><p>Autoplay on load is enabled.</p>
</li>
</ul>
</li>
<li><p><strong>Embedding DRM-Protected Video</strong></p>
<ul>
<li><p>The video player is embedded to cover the entire screen or the part of the webpage that needs protection. The video stays at the base layer of the website, and all other components are given a higher z-index than the player so that it does not affect the website's presentation.</p>
</li>
<li><p>The video runs continuously in the background and acts as a visual shield when the screen is shared, making it difficult for unauthorized parties to capture sensitive information.</p>
</li>
</ul>
</li>
</ol>
<ol start="4">
<li><p><strong>Modification to the DOM</strong><br /> For this implementation to succeed, the website needs to be modified to accommodate the player. These changes are needed because the video turns black when sharing is turned on, so the implementation works best when the text on top of the video is black and the boxes are transparent (as illustrated above). Images are not affected by this approach and remain visible.</p>
</li>
<li><p><strong>System Extensions</strong><br /> While the initial use case focuses on banking websites, this method can be extended to other industries, such as e-commerce, healthcare, or government portals, where sensitive user data is handled and requires protection from unauthorized exposure.</p>
</li>
<li><p><strong>Browser and Device Compatibility</strong><br /> The system is designed to work across major browsers (Chrome, Firefox, Edge, Safari) that support DRM standards. Similarly, it supports various devices, including desktops, laptops, and mobile platforms, that comply with the respective DRM protocols.  </p>
</li>
</ol>
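<p>The layering described in the embedding step can be sketched as a pair of style maps. The exact values are assumptions; the essential parts are the z-index ordering, the transparent content background, and the black text:</p>

```javascript
// Illustrative layering for the DRM camouflage technique: the protected
// video sits at the base of the stacking order and the page content sits
// above it. The specific values are assumptions — adapt them to your page.
function drmLayerStyles() {
  return {
    videoLayer: {
      position: "fixed",
      inset: "0",            // cover the protected region (here, the viewport)
      zIndex: "0",           // base layer, beneath all page content
      pointerEvents: "none", // the video must never intercept user input
    },
    contentLayer: {
      position: "relative",
      zIndex: "1",               // above the video
      background: "transparent", // lets the video show through
      color: "#000",             // black text vanishes when the video blacks out
    },
  };
}
```

<p>Applied with plain CSS or inline styles, this keeps the site visually unchanged for the user, while a screen-share capture sees only the blacked-out video with the black text camouflaged against it.</p>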
<hr />
<p><em>That’s it for this blog. I spend a lot of time researching topics before I write because I believe in delivering well-informed, high-quality content that adds genuine value to my readers. It takes a good amount of time and effort, but knowing that I’m providing accurate, insightful information makes it worthwhile. Thank you for reading, and I hope this helps deepen your understanding of the topic.</em><br /><em>Please consider supporting me if you like my blogs:</em><br /><a target="_blank" href="http://buymeacoffee.com/yasharyan">buymeacoffee.com/yasharyan</a></p>
]]></content:encoded></item><item><title><![CDATA[The Inception of Digital Rupee]]></title><description><![CDATA[Introduction
Did you hear about the new Digital Rupee pilot program flagged off by the Government of India? Initially proposed in the 2022 Union budget, it has been released for a closed user group for now but will eventually be rolled out for the ge...]]></description><link>https://blogs.yasharyan.dev/the-inception-of-digital-rupee</link><guid isPermaLink="true">https://blogs.yasharyan.dev/the-inception-of-digital-rupee</guid><category><![CDATA[Blockchain]]></category><category><![CDATA[Financial Services]]></category><category><![CDATA[india]]></category><category><![CDATA[central banks]]></category><category><![CDATA[Blockchain technology]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Fri, 18 Aug 2023 04:38:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672973357055/faaff4c4-170f-42b5-b82c-b678620cbe40.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Did you hear about the new Digital Rupee pilot program flagged off by the Government of India? Initially proposed in the 2022 Union budget, it has been released for a closed user group for now but will eventually be rolled out for the general public. This currency makes completely digital transactions possible, powered by blockchain technology.</p>
<p>Governments worldwide have hesitated to embrace public cryptocurrencies because of apparent concerns about law and order. Still, some economies also fear that cryptocurrencies will undermine traditional financial systems and the role of central banks. CBDCs are a possible solution to these concerns.</p>
<h2 id="heading-central-bank-digital-currency">Central Bank Digital Currency</h2>
<p>Central bank digital currency (CBDC) is a digital version of a country's fiat currency issued and backed by its central bank. CBDC aims to provide a secure and efficient way for the central bank to issue, distribute, and track digital currency while giving consumers an alternative to traditional physical cash. One potential benefit of CBDC is that it could make it easier for the central bank to implement monetary policies, as it would have greater control over the supply and demand of digital currency.</p>
<p>Close to 105 countries are exploring digital currencies. Fifty of these countries, including India, are in various phases of adoption, and 11 have already launched a digital currency following successful pilot projects. The Bahamas was the first to officially launch a digital currency, followed by Nigeria and the Eastern Caribbean Union. India, UAE, Ghana, South Africa, Malaysia, Singapore, and Thailand have also launched their pilot programs.</p>
<h2 id="heading-the-digital-rupee-in-india">The Digital Rupee in India</h2>
<p>To understand the digital rupee, answer this question: Where would you keep your money if you did not have a bank account and wanted to avoid keeping that money in a physical safe somewhere? This is where the digital rupee comes into the picture. It is like money in a wallet, but a digital wallet on the blockchain. For readers familiar with cryptocurrencies, the Digital Rupee is like the Ether you store in your Ethereum wallet.</p>
<p>The digital rupee is the digital sibling of physical Indian Rupee cash, issued and backed by the Reserve Bank of India (RBI). All your notes and coins can be easily converted into digital rupees. This can be done using government-approved wallet apps on the Google Play Store or Apple App Store.</p>
<p>Looking at the screenshots of all these apps, it seems all the banks are using a standardized app provided by the RBI. They all look almost identical, with only minor customization of the color schemes and branding.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690614862753/0efe0665-1c7d-4bb5-9d89-927f92c9b326.png" alt class="image--center mx-auto" /></p>
<p>During the early stages, the Digital Rupee could only be used on Android devices, but iOS apps have since been released. I, however, believe that various customized hardware wallets may enter the market once the pilot moves out of the closed user group (CUG). Only selected banks have been authorized to provide a digital wallet to users for transactions. I am unsure if there are plans for the government to provide access to third-party entities so that they can build their own applications over this infrastructure.</p>
<p>For the pilot program, the RBI had given access to four banks in the first phase, including State Bank of India (SBI), ICICI Bank, Yes Bank, and IDFC First Bank, while Bank of Baroda, Union Bank of India, HDFC Bank, and Kotak Mahindra Bank joined in the second phase. Punjab National Bank, Canara Bank, Federal Bank, Axis Bank and IndusInd Bank joined soon after that.</p>
<blockquote>
<p>“The results of both the pilots so far have been satisfactory and in line with expectations”<br />-The Reserve Bank of India</p>
</blockquote>
<h2 id="heading-whats-changing">What's changing?</h2>
<p>When you make a transaction using any current systems (like NEFT, RTGS, or IMPS), they need to be settled by the banks. The government provides these settlement services through the government counterparty called <em>The</em> <em>Clearing Corporation of India Limited</em> or <em>CCIL</em>. This institution is necessary to avoid discrepancies in the money market and fend off clashes between the financial institutions in the country. This is a time-consuming and resource-hungry process. What Digital Currency brings to the table is the fast processing of these settlements through the blockchain's ledger system. In the current system, only the participating banks know about the transactions, but with CBDC, the ledger is visible to all so that anyone can verify the data.</p>
<p>Another point that needs to be understood is that the Digital Rupee, unlike the fiat currency, is the liability of the Reserve Bank and not the commercial bank.</p>
<p>The government is planning to release two types of CBDCs. The first is CBDC-R (Retail), which would be used for retail settlements, i.e., for general users like you and me. The second, CBDC-W (Wholesale), would be focused on settlements between financial institutions.</p>
<h2 id="heading-does-cbdc-mean-more-cash">Does CBDC mean more cash?</h2>
<p>With the digital rupee, there is no associated physical money. To understand this, let's take an example. Say an institution holds ₹30 lakh (3 million rupees) worth of banknotes that need to be converted to a digital alternative. You can either convert them to digital cash or to a CBDC.</p>
<ul>
<li><p>Converting to digital cash: If you convert the banknotes to digital money, you just create a digital identity for each of those ₹30 lakh in notes. If you are familiar with Linux, this digital cash is something like a symlink: whatever happens to the digital cash happens to the physical notes. If you send ₹10 lakh to Singapore, you send the cash too.</p>
</li>
<li><p>Converting to digital rupee: To convert the cash to digital rupees, the RBI would destroy the physical cash and mint new digital rupees; anyone else would exchange the cash for existing digital rupees worth ₹30 lakh.</p>
</li>
</ul>
<p>The introduction of the digital rupee does not mean there will be more cash in the country, nor that a parallel currency will exist. It will co-exist with the existing rupee ecosystem.</p>
<h2 id="heading-what-are-the-benefits-of-cbdc">What are the benefits of CBDC?</h2>
<p>There are certain benefits to using this type of currency. These are a few of them:</p>
<ul>
<li><p>It could increase financial inclusion by providing an alternative payment option to individuals who may not have access to traditional financial services.</p>
</li>
<li><p>Cost reduction is one of the motivations for adopting the digital rupee. As discussed earlier, the settlement process of the current financial infrastructure is tedious and expensive. The printing, storage, transportation and replacement of banknotes are a drain on taxpayers' money.</p>
</li>
<li><p>CBDC ensures enhanced security. Your currency is secured in a crypto wallet that only you can access. You can increase the security of your wallet by adding two-factor authentication. This way, even if your device is lost, you don't have to worry about your money going anywhere.</p>
</li>
<li><p>CBDC can also potentially be used offline, thus removing dependency on network stability for transactions.</p>
</li>
<li><p>It can prevent counterfeiting and ensure the integrity of transactions.</p>
</li>
<li><p>It can provide better traceability and transparency in financial transactions, which could help mitigate financial crimes.</p>
</li>
</ul>
<h2 id="heading-are-there-any-disadvantages-to-cbdc">Are there any disadvantages to CBDC?</h2>
<p>There are several potential disadvantages to implementing a CBDC:</p>
<ul>
<li><p>Implementing CBDC requires the setup of significant technical infrastructure.</p>
</li>
<li><p>With increased traceability and transparency, many privacy concerns arise, which would make citizens hesitant to adopt CBDC.</p>
</li>
<li><p>Many would not want to shift to using CBDC without familiarity with the technology.</p>
</li>
<li><p>Central banks may lose some control over the money supply and financial system if many transactions shift to a CBDC. They can track the movement, but they cannot take the money.</p>
</li>
<li><p>Digital currencies, including CBDCs, are vulnerable to cyber attacks, which could compromise the security and integrity of the currency.</p>
</li>
</ul>
<h2 id="heading-frequently-asked-questions">Frequently Asked Questions</h2>
<h3 id="heading-how-is-digital-rupee-different-from-upi">How is Digital Rupee different from UPI?</h3>
<p>It is like comparing a ₹10 note with NEFT. UPI is a system that allows the transfer of money between parties, whereas the Digital Rupee is cash, but digital. If UPI support is ever added for the Digital Rupee, you will transfer your digital rupees to others using UPI.</p>
<h3 id="heading-how-is-digital-rupee-different-from-cryptocurrency">How is Digital Rupee different from Cryptocurrency?</h3>
<p>Unlike cryptocurrencies, which are commodities, the Digital Rupee can be considered a fungible token (not an NFT). Cryptocurrencies like Ether, XRP, and Bitcoin are all on a <a target="_blank" href="https://www.blockchain-council.org/blockchain/public-vs-private-blockchain-a-comprehensive-comparison/#:~:text=A%20private%20blockchain%20is%20a%20permissioned%20blockchain.%20Private,this%20leads%20to%20reliance%20on%20third-parties%20to%20transact.">public ledger</a> (chain) and can be bought or sold by anyone. These cryptocurrencies have no issuers. The Digital Rupee is based on a private, permissioned blockchain that is limited to users allowed by the chain's owner; in this case, the Government of India.</p>
<h3 id="heading-blockchain-technologies-are-known-to-be-secure-and-anonymous-how-do-they-avert-illegal-activities">Blockchain technologies are known to be secure and anonymous. How do they avert illegal activities?</h3>
<p>The Digital Rupee is distributed and controlled by banks, so if you want to exchange your rupees for digital rupees, you must go to a bank, and if you wish to exchange your digital rupees back for physical cash, you must again go to a bank. Either way, you buy your way in or out through banks that the RBI closely monitors, and everything on the platform is monitored as well. So, the probability of illegal activity on this platform is very low.</p>
<h3 id="heading-does-this-mean-banks-are-going-away">Does this mean banks are going away?</h3>
<p>Well, no. The Indian government has authorized the banks to distribute these tokens. Like it or not, banks are still going to be there. You will still need the banks if you want to earn interest by depositing your money or if you want to apply for a loan. All traditional bank functionalities will still be there.</p>
<h3 id="heading-why-would-i-want-to-use-digital-rupee-over-upi">Why would I want to use Digital Rupee over UPI?</h3>
<p>Remember when your last UPI transaction was stuck, and you were left with an ice cream in your hand and the shopkeeper staring at your face? You try again, but your internet is not working, and you have to ask someone for their hotspot. Or have you double-paid someone accidentally? This issue is solved by the digital rupee, as the transactions are instantaneous. When you pay someone through UPI, both your and the receiver's bank are working in the background to resolve the payment request. That is what causes the delay in your transactions. However, with Digital Rupee, no intermediaries are involved, so transactions are lightning-fast.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring composite APIs]]></title><description><![CDATA[!! NOTE: This is a developing story and in review+research phase
Introduction
Have you heard of composite APIs? Do you know what it does? Like most of the freshers out there, even I had no idea about such architecture. I recently joined a company as ...]]></description><link>https://blogs.yasharyan.dev/exploring-composite-apis</link><guid isPermaLink="true">https://blogs.yasharyan.dev/exploring-composite-apis</guid><category><![CDATA[composite api]]></category><category><![CDATA[APIs]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Mon, 28 Nov 2022 02:05:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1669773834695/MtXAqrQcv.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>!! NOTE: This is a developing story and in review+research phase</strong></p>
<h2 id="heading-introduction">Introduction</h2>
<p>Have you heard of composite APIs? Do you know what they do? Like most freshers out there, I had no idea about such an architecture. I recently joined a company as a product manager, and my area of work includes managing the development of the company's composite API. To help you better understand what a composite API is, read along with this scenario:</p>
<p>Assume that you are developing the backend for a food delivery service. A general flow for ordering food for a signed-in user looks like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667486081955/q057PNlV5.png" alt="image.png" class="image--center mx-auto" /></p>
<ol>
<li>The user selects the restaurant they want to order food from, and the UI fetches the list of items available for that restaurant and their respective prices.</li>
<li>The user selects the food item(s) from the displayed list, which is then added to the user's cart.</li>
<li>To calculate the total amount of items in the cart, the UI makes specific calls to the backend. These calls involve:<ul>
<li>Applying discounts or rebates due to the user's existing subscription(s) </li>
<li>Calculation of taxes on the deliverables</li>
<li>Checking for price fluctuations of the items in the cart</li>
<li>Charges based on high-demand surges</li>
<li>Calculation of delivery charges using Maps and Traffic API.</li>
<li>Factoring in extreme weather conditions using some Weather API to increase the charges or disallow the user from placing the order altogether.</li>
</ul>
</li>
<li>The user is then taken to the checkout page and eventually to the payment gateway for making the transaction. </li>
</ol>
<p>Well, for JavaScript developers, this is callback hell. You have to wait for each API call to respond before invoking the next one so that you can pass the data from the previous call on to the next. Damn those poor devices with entry-level specs handling all that business logic. </p>
<p>Hello, wait a second. Did you also notice how many ways this can fail? If API <em>A</em> isn't working, <em>B</em> won't work. If <em>C</em> isn't working, <em>D</em> won't. It's a freaking domino effect. Imagine how much focus handling all these exceptions will require.</p>
<p>Now, now. Don't be an idiot. You can't just use <code>try-catch</code> here if you want a robust system. It's much more than "If it works, it works, or else show them the error page." Suppose the expected delivery time can't be displayed because of a weather API failure. In that case, you can't deny customers the ability to place an order because of an "Internal Error" message. Say goodbye to your revenue.</p>
<p>I am sluggish when I am working with the front end. Matching the UI from the designs from Figma, applying UI validations, and handling edge cases is already too much. I just want to consume data from an API and render the UI according to that. But in this case, it is a nightmare. It would be much easier to get data in a single API call using composite APIs. I can focus more on building UI than handling data. The following diagram depicts how composite APIs work.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1667594285716/F8L8u1fh-.png" alt="image.png" class="image--center mx-auto" /></p>
<h2 id="heading-what-is-a-composite-api">What is a composite API?</h2>
<p>Usually, a backend provides micro-level control over the entire system, and data is often needed from multiple sources in that system. This is where the composite API comes into play. Developers working on composite APIs are not creating new resources; they are consuming existing endpoints to organize the data in the format required by the UI. In other words, they batch multiple API calls into one and handle the business logic along the way.</p>
<p>In fact, composite APIs are built over the existing architecture of the product and are usually introduced later over the product's lifetime. Your organization might have been using traditional APIs for fetching data for years before realizing the need for a composite API. Your product should work just fine even without it. </p>
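<p>The pattern can be sketched in a few lines of Python. Everything below is hypothetical: the three service functions are stand-in stubs, not any real API, and a production composite platform would call real downstream endpoints instead.</p>

```python
# Sketch of a composite endpoint. The three service functions are
# hypothetical stubs standing in for real downstream APIs.

def get_cart(user_id):
    # Stub: would call the cart service.
    return {"items": [{"name": "margherita", "price": 400}]}

def get_discount(user_id, subtotal):
    # Stub: would call the subscriptions/offers service.
    return 0.10 if user_id == "subscriber" else 0.0

def get_delivery_charge(user_id):
    # Stub: would call a maps/traffic service.
    return 30

def checkout_summary(user_id):
    """Composite call: batches several dependent calls into one response."""
    cart = get_cart(user_id)                    # call 1
    subtotal = sum(i["price"] for i in cart["items"])
    discount = get_discount(user_id, subtotal)  # call 2 uses call 1's output
    delivery = get_delivery_charge(user_id)     # call 3
    total = round(subtotal * (1 - discount) + delivery, 2)
    return {"items": cart["items"], "subtotal": subtotal,
            "discount": discount, "delivery": delivery, "total": total}
```

<p>The frontend now makes a single request for the checkout summary instead of orchestrating three dependent calls itself.</p>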
<h2 id="heading-why-do-we-need-composite-api">Why do we need composite API?</h2>
<p>Every other developer is tempted to use new technology in their existing project. But just as blockchain is not the solution to every product, the composite API has only specific use cases. A composite resource is used when:</p>
<ul>
<li>A series of API requests must be made to obtain the final data.</li>
<li>There are multiple CRUD operations taking place.</li>
<li>The output of one request is the input of the subsequent request.</li>
<li>In a microservice architecture, data must be fetched from different services simultaneously.</li>
</ul>
<h2 id="heading-benefits-of-using-a-composite-api">Benefits of using a composite API</h2>
<ul>
<li>On platforms where the number of API calls you make is counted and limited, a call to a composite API is counted as a single API call. </li>
<li>Although the composite API makes the same number of calls as the traditional model, the load on the frontend is relatively low as the composite platform makes all the main calls. </li>
<li>It keeps the frontend code clean because failure logic is handled by the composite platform and does not need to be developed on the frontend. If any API fails and the entire batch can't complete successfully, the composite platform handles that logic, and the frontend only needs to render the screen accordingly.</li>
<li>It can reduce server load and improve application performance.</li>
</ul>
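<p>To make the failure-handling point concrete, here is one way a composite platform can degrade gracefully instead of failing the whole batch. This is a sketch with made-up names, not any particular framework:</p>

```python
def fetch_base_total():
    # Stub: a critical call that must succeed.
    return 430

def fetch_weather_surcharge():
    # Stub: a flaky, non-critical third-party call.
    raise TimeoutError("weather API unreachable")

def composite_checkout():
    """If a non-critical call fails, fall back instead of failing the batch."""
    result = {"total": fetch_base_total(), "warnings": []}
    try:
        result["total"] += fetch_weather_surcharge()
    except Exception:
        # Skip the surcharge and flag it so the UI can decide what to show.
        result["warnings"].append("weather surcharge unavailable")
    return result
```

<p>The order can still be placed even though the weather call failed; the frontend simply renders a warning instead of an "Internal Error" page.</p>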
<h2 id="heading-when-to-and-not-to-use-a-composite-api">When to (and not to) use a Composite API?</h2>
<p>Composite API is a fantastic solution for handling the complexity of fetching data from the frontend, but it is not the answer to every problem. Use this architecture when sequential calls to various services are needed to fetch data. If it is just an update job, use a Batch API instead.</p>
<p>I have seen people requesting endpoints in the composite API just for making one freaking call. People, it is not a proxy! Your architecture should not change the state of the backend directly; it instructs the existing APIs to update the state. Don't expect your composite API to be an alternative to the original backend.</p>
<p>The architecture can introduce its own storage layer for temporary data, and it can also cache data for fast reads.</p>
<h2 id="heading-closing-notes">Closing notes</h2>
<p>There are very few resources available on the internet that talk about composite APIs. However, the resources by Salesforce are good enough to learn what it is and how you can create a composite API using their platform, and they also lay down a guide for developing a composite API architecture. It is each company's decision how they want to structure it. The response pattern shown in the Salesforce docs is very different from what my company uses, and it likely differs again across other organizations using this architecture. And that's the beauty of software development: there is no hard and fast rule for doing something. You can be imaginative.</p>
]]></content:encoded></item><item><title><![CDATA[What the fish is Edge Computing?]]></title><description><![CDATA[The IT kingdom is expanding at a rate of knots, faster than it was growing before the pandemic hit. More people are using phones and IoT devices with internet capabilities. We see 4G being adopted even in under-developed countries, and the developed ...]]></description><link>https://blogs.yasharyan.dev/what-the-fish-is-edge-computing</link><guid isPermaLink="true">https://blogs.yasharyan.dev/what-the-fish-is-edge-computing</guid><category><![CDATA[THW Cloud Computing]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Mon, 25 Apr 2022 07:35:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1650871501721/NKiI0LH6B.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The IT kingdom is expanding at a rate of knots, faster than it was growing before the pandemic hit. More people are using phones and IoT devices with internet capabilities. We see 4G being adopted even in under-developed countries, and the developed and the developing world are preparing for the 5G advent. The massive expansion of user-created and organizational data is the new hurdle the IT crowd is battling these days. Businesses can no longer keep up with the pace of change if they are not ready to be flooded with data and possess the ability to process them efficiently. While the cloud once promised to offer everything businesses need, it no longer holds that promise.</p>
<p>While it is true that there is no substitute for the cloud in the near future (unless you are a believer in web3, WAGMI!), it is also true that by storing essential data outside your organization's premises, you are inviting hackers to steal that data, and that by transferring terabytes of data to and from your cloud service provider every day, you are tightening your belt.</p>
<p>So, if you are not adapting to the edge, your online business will most definitely fail.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://media.giphy.com/media/3o7TKtxXWkCZ94s7i8/giphy.gif">https://media.giphy.com/media/3o7TKtxXWkCZ94s7i8/giphy.gif</a></div>
<p>Okay, that statement is not entirely true. Edge computing is still in its very early stages of adoption, and your business need not make a switch just yet. The IoT industry is the primary target of the Edge computing space. But before I explain the whys and the hows, check out this excellent business idea.</p>
<p>Imagine having a DVD player that sends data from a disc to a server on another continent for decoding, which sends back a playable stream to display on your television. Sounds futuristic and ludicrous at the same time, right? You buy the DVD player, buy the disc with the movie you want to watch, and also pay a monthly subscription for the online video codec. Think about it: what purpose does the DVD player serve? Just sending and receiving data from the servers? Could I have made the DVD player any dumber than this?</p>
<p>Look around you before you decide this is dumb and foolish and want to file a lawsuit against me. There might be an equally dumb device sitting around you. Does it look something like this?</p>
<p><img src="https://cdn-images-1.medium.com/max/1000/0*mGkLO-Wo8lC_nXiC" alt /></p>
<p>Think about it. Your Alexa device is just a speaker that connects to the internet for data transfer. When you ask Alexa something, it compresses your command and sends it to wherever the data is processed, makes a few API calls to fetch the appropriate data, and then sends the response voice back to the device to play. Does my idea sound foolish now?</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://media.giphy.com/media/yLuYeOl9LivBe/giphy.gif">https://media.giphy.com/media/yLuYeOl9LivBe/giphy.gif</a></div>
<p>That might not have been a compelling enough reason for most of you to call the device dumb. Well, there are three significant issues with the current model. </p>
<h2 id="heading-what-benefits-does-edge-computing-offer">What benefits does edge computing offer?</h2>
<ol>
<li><p>Latency: Let's take an example. You want to order Pizza, but there is no phone in your house. The only phone that can be used is in your office, 5 kilometers away. So to have pizza with your family, you would have to take everyone's order, drive to your office, repeat the order to the receptionist, who in turn will call the pizzeria, wait for the delivery agent to give you the pizza in your office and then finally take it back to your home and distribute it. See the problem? Wouldn't it have been better if you had a phone for yourself? You could have saved a lot of time. This example accurately depicts what smart assistant devices are. They do not have their own "phones." If you ask it something, it has to "drive to its office"(Amazon servers/Google servers/Apple servers) to make a request. This, in turn, causes latency, which results in poor user experience. Suppose the smart assistant devices could make API calls independently after analyzing the voice input. In that case, the delay in response could have been reduced. </p>
</li>
<li><p>Security and Privacy: Would you be comfortable making calls to your loved ones in front of a stranger every time? Your smart assistant devices are always listening to whatever is being spoken around them; that's why they can pick up the wake word at any time. What's to say they are not sending everything to their servers, even when they are not being spoken to? I am not saying that they do, but who knows? If it were possible to parse the speech at the point of origin, security could be increased drastically. </p>
</li>
<li><p>Bandwidth: When everything is processed at the server, everything generated at the origin needs to be sent to the server, and that requires a lot of bandwidth. You might argue that a transferred voice sample is not much data, and that's correct, but we need a different example here. IoT devices are not limited to smart speakers. For instance, consider a vast compound with about 20 high-definition surveillance cameras monitored by a security firm. All the footage each of the 20 cameras captures is sent to the security firm over the internet for analysis. A 1080p camera produces about 35 GB on average per day, so a 20-camera network generates close to 700 GB per day. That's a lot of bandwidth dedicated to cameras. What if the cameras transmitted their feed to the firm only when they detected movement? That would save a lot of data, wouldn't it?</p>
</li>
</ol>
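<p>The camera example above can be sketched as an edge-side filter. The motion check here is a deliberately naive frame difference on toy data, just to illustrate the idea of uploading only interesting frames:</p>

```python
def motion_detected(prev_frame, frame, threshold=10):
    """Naive motion check: mean absolute pixel difference between frames."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)
    return diff > threshold

def edge_filter(frames):
    """Keep only frames that differ enough from the previous frame."""
    uploaded = []
    prev = frames[0]
    for frame in frames[1:]:
        if motion_detected(prev, frame):
            uploaded.append(frame)  # only these would leave the premises
        prev = frame
    return uploaded
```

<p>A static scene produces near-identical frames, so nothing is uploaded; only frames containing movement consume bandwidth.</p>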
<h2 id="heading-gosh-would-you-tell-me-what-it-is">Gosh!!! Would you tell me what it is???</h2>
<p>Now that you know the problems with the current system, let me bring to your attention what edge computing brings to the table. Edge computing is not as abstruse as you may have anticipated. It does not replace the cloud; it brings the cloud to you. Don't get it? Let me simplify it for you.</p>
<p><img src="https://cdn-images-1.medium.com/max/1000/1*sOOdwM13CQHJbuuC6QhBgA.png" alt /></p>
<p>Remember the smart assistant example we just talked about? Since the voice sample is sent to the servers, it causes latency and privacy concerns (privacy is secondary in this case; focus on latency). Edge computing is all about placing computational power close to the source of data generation, or at the "edge of the network," as it is called. So, for example, your Alexa device could recognize the spoken command on its own and make various API calls autonomously without bothering the centralized servers every time. The latency would be significantly reduced. But this is not just some pretend-talk. Amazon has already been working on these edge devices and has developed a powerful chip that can handle voice recognition.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/CNET/status/1442894647097888771">https://twitter.com/CNET/status/1442894647097888771</a></div>
<p>Another use case can be found in a self-driving car. Autonomous cars need to act on data from their sensors in real-time. It cannot wait for a response from the servers to make a turn. What if there is a connectivity issue? What if the instruction received is too late? This can all be tackled by adding edge devices to the car, which can make decisions by analyzing real-time data from all the sensors in the vehicle and sending periodic information to the data center. </p>
<p>A smart farm that monitors the health of crops need not transfer data 24x7. Installed edge devices can read the temperature sensors and automatically adjust the temperature locally, uploading the necessary data only every once in a while.</p>
<h2 id="heading-are-there-downsides-to-edge-computing">Are there downsides to Edge-computing?</h2>
<ol>
<li><p>Adding more devices that handle sensitive information can increase the number of vulnerable points for hackers to exploit. </p>
</li>
<li><p>Adding more hardware to the network would require increased capital investments.</p>
</li>
<li><p>Maintenance costs will shoot up because of the added infrastructure.</p>
</li>
<li><p>It would be crucial for the development teams to design the edge equipment so that it discards only irrelevant data without bleeding crucial data needed for further analysis.</p>
</li>
</ol>
<h2 id="heading-tldr">TL;DR</h2>
<p>To summarize, edge computing essentially places powerful devices closer to the edge of the network (end-user devices) that perform analysis on real-time data and give results and only send necessary data to the centralized servers. It's all about bringing the cloud closer to you without you even noticing it. It is still in very early phases, and worldwide availability of edge services is far, but it is forecasted that the edge-computing industry will become a <a target="_blank" href="https://medium.com/r/?url=https%3A%2F%2Fwww.globenewswire.com%2Fnews-release%2F2022%2F03%2F03%2F2396216%2F29442%2Fen%2FEdge-Computing-Market-Size-Worth-61-14-Billion-by-2028-CAGR-38-4-Grand-View-Research-Inc.html%23%3A~%3Atext%3DMulti%252Daccess%2520Edge%2520Computing%2520Market%2Cby%2520Grand%2520View%2520Research%252C%2520Inc.">multi-billion dollar industry by 2028</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Zero Trust Networks]]></title><description><![CDATA[Before talking about Zero Trust, what it is, and how companies are implementing it, let us take up an example of an imaginary city.
Hypothetical case
A long time ago, a walled empire known as Talevaria was considered one of the safest and most loved ...]]></description><link>https://blogs.yasharyan.dev/zero-trust-networks</link><guid isPermaLink="true">https://blogs.yasharyan.dev/zero-trust-networks</guid><category><![CDATA[#cybersecurity]]></category><category><![CDATA[networking]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Tue, 25 Jan 2022 19:11:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1643135343046/wYq-dvDak.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Before talking about Zero Trust, what it is, and how companies are implementing it, let us take up an example of an imaginary city.</p>
<h2 id="heading-hypothetical-case">Hypothetical case</h2>
<p>A long time ago, a walled empire known as Talevaria was considered one of the safest and most loved realms of the medieval age. No one outside the kingdom was allowed in, and no one from inside was allowed to go out. People inside were free to move around the confines of the wall without any restrictions, and people on the outside wished they could be a part of the fantastic kingdom. Talevarians had unrestricted access to every resource inside the walls, and there were no law enforcement officers maintaining order because the King trusted his subjects. The kingdom's entire infantry was deployed at the borders, protecting it from the neighboring kingdoms, as it was frequently under attack because of its abundant resources and strategic importance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1643135463689/90mCyW0sI.png" alt="image.png" /></p>
<p>While the brave warriors of this kingdom were busy fighting at the border, an uprising began taking over provinces, one by one. Soon, the entire kingdom was in the hands of these rebels. There was no way to control the situation, as all the law enforcement personnel were deployed at the kingdom's border.</p>
<p>This is a classic example of traditional network architecture, and the Zero Trust model can help solve its issues.</p>
<h2 id="heading-zero-trust-networks">Zero Trust Networks</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1643135591527/0IVflmdMZ.png" alt="image.png" /></p>
<h3 id="heading-what-is-it">What is it?</h3>
<p>The concept of Zero Trust is not new. It was coined in 1994 by Stephen Paul Marsh in his doctoral thesis, and John Kindervag repopularized it in 2010 while working as a researcher at Forrester Research. Although it appeared to be overkill back then, it makes more sense now than it ever did before.
There are many definitions of Zero Trust on the internet, but the one from <a target="_blank" href="https://www.techtarget.com/searchsecurity/definition/zero-trust-model-zero-trust-network#:~:text=A%20zero-trust%20model%20is,device%20authentication%20throughout%20the%20network.">TechTarget</a> makes the most sense:</p>
<blockquote>
<p>It is a security framework that fortifies the enterprise by removing implicit trust and enforcing strict user and device authentication throughout the network.</p>
</blockquote>
<p>So, in simple terms, the Zero Trust model promotes the need to verify every device that connects to a private network, irrespective of its location, be it inside a secured perimeter or outside at a remote site, and irrespective of the device's state.</p>
<p>In pre-pandemic times, the traditional model required every resource the organization needed to be within the 'protected boundaries' of the network. In present times, that is not possible. With more than half of the workforce working remotely, this model fails catastrophically. It was easier for cybersecurity professionals when pre-configured devices were provided to employees, but that option was heavier on the company's pocket. The introduction of BYOD (bring-your-own-device) policies has made it more challenging for them to maintain order on the network. A user's device settings could change (or be intentionally changed), the device could be infected with malware, or it could be compromised by an attacker, leaving the network vulnerable. And what if the company uses resources that are scattered all over the internet? With the increase in the importance and volume of data, small and medium enterprises opt for cloud services like AWS, GCP, or Azure. This does not sit well with the traditional on-premise model, as the internal network has to request resources from these cloud servers. Alternatively, consider this: with more than 50% of the workforce working from home, it is not practically possible to check every connection for vulnerabilities.</p>
<p>Zero Trust solves these issues. By default, Zero Trust bears the motto, "Never trust, always verify." You read that right. No one is trusted, be it the CEO's device, the CTO's, or even the CISO's. Zero Trust takes a hostile approach towards security: whenever a device connects to the network, it is treated as hostile. Username and password authentication are not enough; with Zero Trust, enterprises are bidding goodbye to password-only authentication. Multi-level authorization is required, but that is not all. Here is where it gets interesting.</p>
<p>For example, Emiko is an employee at a multinational tech company living in Tokyo. She had to shift to the WFH model because of the pandemic. Every working day, she uses VPN software to log in to her company's network through her company-allotted device. She then SSHes into a system to monitor its status and spends some time doing that. After that, she opens the local storage server to view the last document she edited and continues working on it for the remainder of the day. Emiko also has a habit of logging out of each resource she uses on the network; she does not wait for the auto-logout period. Finally, she disconnects the VPN connection when it is time to wrap up. This is Emiko's normal behavior, and the Zero-Trust-enabled network knows it. Let us say that Emiko visits her parents in Nakagawa for a weekend and decides to stay there longer. On Monday, when Emiko tries to log in to her VPN account, apart from the usual login procedure, she has to go through an additional step of entering a one-time password sent to her work email. Only then is she allowed onto the network.</p>
<p>Take another example: Emir works remotely from Istanbul for the same company and has been doing so since the pandemic. Everything seems normal until he decides to dual-boot his PC. When the next work day starts, he cannot log in. Every time he tries, he gets the same alert: "Access denied due to policy violation."</p>
<p>Zero Trust is responsible for this behavior. It already does not trust anyone, but the moment there is suspicious behavior, it raises its guard. If you noticed, in the first scenario, authentication takes one additional step when Emiko tries to log in from a new location, because the network has never received a login request from Emiko's device at that location. The company can also enforce re-verifying her before she accesses any resource she usually uses. Similarly, when Emir decided to dual-boot his PC, he might have turned off Secure Boot in his BIOS to install a Linux OS, or changed the boot device order so he could boot from a USB device. So, if the company's policy is that no device with Secure Boot turned off can log in to the network, Emir's device failed to comply with the policy set by the company and thus was not allowed in.</p>
<h3 id="heading-why-is-it-important">Why is it important?</h3>
<p>The Covid pandemic pushed almost every tech company out there to move to a remote model. Employees use a VPN to get inside their company's network to access data and resources. Nevertheless, the question arises: is that enough? Sure, it is an excellent solution as long as the number of people using it simultaneously can be counted on your fingers. However, what if that number is in the hundreds? That is a security disaster waiting to happen. Here is where Zero Trust comes to the rescue.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1643135735271/g9RZCVVHQK.png" alt="image.png" /></p>
<p>Zero Trust is not a piece of software or an algorithm that can simply be added to a network. It is much more complex than that. Imagine you know how every employee in your organization accesses your network: what device they use, the OS, the kernel version, the BIOS version and settings, the software installed, how fast they type, where they usually access the network from, and much more. How do you use this information? You match the collected profile against the live session whenever they try to log in. If anything changes, you challenge the employee with more authentication steps or block them altogether.
This is how the network grants you access. But is that all Zero Trust is about? No. Another essential feature is that it only gives you access to the resources you essentially need - nothing more, nothing less. Let's say you have been tasked with reviewing documentation for new software the company is about to release. The only access you will get is to the file server, with read and comment privileges. You can neither make changes nor download the file to your local system.</p>
When an organization migrates to the Zero Trust architecture, the following advantages can be observed:</p>
<ul>
<li><strong>Reduces organizational risk:</strong> Zero Trust assumes that all applications and services are malicious and disallows them from communicating until they can be verified. It thus reduces risk because it tracks the communication between these assets.</li>
<li><strong>Reduces the risk of a data breach:</strong> Because Zero Trust is based on the principle of least privilege, any user in the system has to prove their identity for every level of access. Lateral movement through the system is restricted by the architecture; even if attackers gain access to one segment of the network, there is nowhere they can go from there without establishing trust again. Even for an inside actor, access is limited to the task they are assigned. No one enjoys unlimited access in a Zero Trust architecture.</li>
<li><strong>Secures cloud adoption:</strong> The cloud has become very important for enterprises because it helps save capital on infrastructure setup, but cybersecurity specialists dread it because they lose visibility and access control. Zero Trust enables the classification of all assets on the cloud to establish the proper protection and access controls.</li>
<li><strong>Supports regulatory compliance:</strong> Regulatory requirements like the GDPR, HIPAA, and CCPA are a top concern for organizations. In Zero Trust, identity and payload are verified on each request, stopping an attacker before they reach the data. This meets or exceeds the compliance requirements of today's regulatory frameworks.</li>
<li><strong>Lowers reliance on endpoint protection:</strong> Endpoints like servers, laptops, desktops, and point-of-sale (POS) devices are frequently targeted by hackers to gain access to an organization's internal networks. Ransomware and malware also find their way into the network through these endpoints. Organizations spend significant capital on protecting them, but with Zero Trust, reliance on traditional endpoint protection solutions can be reduced, with identity at the center of network security.</li>
<li><strong>Enables hybrid workforce security:</strong> The rapid adoption of remote work has forced companies to collaborate from anywhere using any device. Zero Trust enables real-time security across all security domains for this scattered workforce.</li>
</ul>
<h3 id="heading-key-principles">Key Principles</h3>
<p>Zero Trust's key feature is least-privileged access, which assumes that no user or application should be inherently trusted. This is the very basis of Zero Trust, but the fundamental principles are as follows:</p>
<ul>
<li><strong>Secure all communication:</strong> Access requests, whether they originate within the network or beyond it, should meet the same security requirements. No one gets extra privilege.</li>
<li><strong>Grant least privilege:</strong> Access should be granted with the least privilege needed to complete a given task. We talked about this earlier: the only permissions needed to review documentation are read and comment. You don't need any more permissions for reviewing.</li>
<li><strong>Grant access to a single resource at a time:</strong> When a user authenticates, they are only authorized for a single asset. If they need to use another resource, they'll need to authenticate again.</li>
<li><strong>Make access policies dynamic:</strong> Authenticating a user should not depend solely on credentials and static policies. Policies should be dynamic and can take into consideration device analytics, user behavior, or even environmental factors. For example, a policy could depend on the BIOS settings of the device, the device's location, the charge remaining on the device, the typing speed of the user in the current session, malware infection, etc.</li>
<li><strong>Monitor security posture for assets:</strong> As a security professional, you'll need to monitor activities like web tampering, suspicious processes, web shells, unusual logons, and other malicious activity in real time to act against them immediately.</li>
<li><strong>Collect and use data to improve security posture:</strong> This includes collecting data about user behavior, device status, and location to optimize the authentication service using machine learning.</li>
<li><strong>Periodically re-evaluate trust:</strong> Once in a while, trust should be re-evaluated to make sure everything is in order and to check whether some resource needs a higher level of protection than it did before.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1643135991818/vzWBMA0iO.png" alt="image.png" /></p>
<h3 id="heading-how-to-apply">How to apply?</h3>
<p>Zero Trust might be fascinating in theory, but how do enterprises apply it to their networks? To establish Zero Trust, one needs to constantly monitor and control users and network traffic, verify the traffic between any two points in the network, and combine this with strong multifactor authentication methods such as biometrics or one-time codes.</p>
<ol>
<li><strong>Identify protected resources:</strong> Before implementing Zero Trust architecture, assets, data, and services need to be classified. You need to determine what you will be protecting on priority.</li>
<li><strong>Define policies:</strong> Next, the users' expected behavior should be documented. This step can be accomplished by asking: "<em>Who is going to access what, when, why, and from where?</em>"</li>
<li><strong>Identify data feeds:</strong> This step can be broken down into a straightforward question: "<em>What sort of data do you need to make the access-granting decision?</em>" The data feeds could include threat intelligence, activity logs, compliance systems, etc.</li>
<li><strong>Devise the trust algorithm:</strong> This algorithm will grant access based on factors like the request, the policy, and the data feeds. It will also evaluate behavioral patterns to make access-granting decisions.</li>
<li><strong>Define the architecture:</strong> This usually depends on the organization's current setup. The architecture maps the monitoring, accessing, and all other aspects of how the network will operate.</li>
</ol>
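<p>The trust algorithm in step 4 can be sketched in a few lines of JavaScript. Everything below is illustrative: the signal names, weights, and thresholds are invented for this example, and a real trust engine would score many more signals drawn from the data feeds identified in step 3.</p>

```javascript
// Hypothetical trust algorithm: score a live session against a user's
// recorded baseline and decide between "allow", "challenge" (step-up
// authentication), and "deny". Names and weights are illustrative.
function evaluateTrust(baseline, session, policy) {
  let score = 0;
  if (session.location === baseline.location) score += 1; // familiar location
  if (session.deviceId === baseline.deviceId) score += 1; // known device
  if (session.secureBoot) score += 1;                     // device posture is intact
  if (!session.malwareDetected) score += 1;               // threat feed says clean

  // Hard policy violations deny outright, regardless of the score
  if (policy.requireSecureBoot && !session.secureBoot) return "deny";

  if (score >= policy.allowThreshold) return "allow";
  if (score >= policy.challengeThreshold) return "challenge"; // e.g. one-time password
  return "deny";
}

// Emiko logging in from a new location: known device, clean posture
const decision = evaluateTrust(
  { location: "Tokyo", deviceId: "emiko-laptop" },
  { location: "Nakagawa", deviceId: "emiko-laptop", secureBoot: true, malwareDetected: false },
  { requireSecureBoot: true, allowThreshold: 4, challengeThreshold: 2 }
);
```

<p>With this policy, Emiko's login from Nakagawa scores 3 out of 4, so the algorithm returns <code>"challenge"</code> - exactly the one-time-password step she experienced - while a device with Secure Boot turned off, like Emir's, is denied outright.</p>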
<h3 id="heading-industry-adoption">Industry Adoption</h3>
<p>Even though Zero Trust architecture is a widely recognized way to mitigate intrusions, its adoption has been slow and inconsistent. A study by <a target="_blank" href="https://www.ibm.com/downloads/cas/OJDVQGRY">IBM</a> in 2021 showed that the average cost of a breach was USD 1.76 million less at organizations with a mature Zero Trust approach than at organizations without Zero Trust. Of all the organizations that have started implementing it, most are still in an intermediate phase. The survey revealed that 35% of the participating organizations had partially or fully deployed the Zero Trust model, and of the remaining 65%, 22% were planning to deploy it within a year.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1643137567693/MW8pQX-fU.png" alt="image.png" /></p>
<p>It also stated that the average cost of a breach at organizations without Zero Trust deployed was $5.04 million. At the same time, about 43% of organizations have no plans to implement Zero Trust at all.
In May 2021, the Biden administration instructed US federal agencies to adhere to the <a target="_blank" href="https://csrc.nist.gov/publications/detail/sp/800-207/final">NIST 800–207</a> standard for Zero Trust because of the increasing number of high-profile security breaches. As a result, this standard has been subjected to validation and suggestions from various private and government organizations, making it the go-to standard for private enterprises as well.</p>
<h2 id="heading-closing-notes">Closing Notes</h2>
<p>Zero Trust is a new way to architect an organization's cyber defense. It provides a collection of concepts, ideas, and component relationships designed to eliminate uncertainty in enforcing accurate access decisions in information systems and services. Zero Trust's benefits outweigh its implementation cost.</p>
]]></content:encoded></item><item><title><![CDATA[Store images on MongoDB]]></title><description><![CDATA[Images have become a crucial part of the internet. It's not just web applications that need images, social media has made sure that users not only consume data but also produce and share them. Applications like WhatsApp, Telegram, and Discord also su...]]></description><link>https://blogs.yasharyan.dev/store-images-on-mongodb</link><guid isPermaLink="true">https://blogs.yasharyan.dev/store-images-on-mongodb</guid><category><![CDATA[Web Development]]></category><category><![CDATA[MongoDB]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[Express]]></category><category><![CDATA[mongoose]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Mon, 24 May 2021 03:38:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1621853030875/mAYNz27f5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Images have become a crucial part of the internet. It's not just web applications that need images, social media has made sure that users not only consume data but also produce and share them. Applications like WhatsApp, Telegram, and Discord also support sharing documents. So, as a backend developer, handling images and storing them on the database is a must. For this tutorial, I am assuming that you are fairly good with ExpressJS and can use Mongoose, or at least know how to use the MongoDB native drivers for NodeJS. I am also assuming that your Express Server is already set up with Mongoose, or that you are using the native MongoDB drivers for NodeJS</p>
<h2 id="form-encoding">Form encoding</h2>
<p>When making a <code>POST</code> request, you need to encode the data that is passed along to the backend so that it can be easily parsed. HTML forms provide three methods of encoding:</p>
<ul>
<li><strong>application/x-www-form-urlencoded</strong>: The default mode of encoding. A long string of name-value pairs is created, where each name is separated from its value by an <code>=</code> and each pair is separated by an <code>&amp;</code>, so that it can be parsed by the server.</li>
<li><strong>multipart/form-data</strong>: This encoding is used when there is a need for files to be uploaded to the server.</li>
<li><strong>text/plain</strong>: Introduced as part of the HTML5 specification; not widely used in general.</li>
</ul>
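<p>To make the default encoding concrete, here is a small sketch using <code>URLSearchParams</code> (built into Node.js and browsers) to parse such a string; the field names are made up for illustration:</p>

```javascript
// A urlencoded body is one long string: each name is separated from its
// value by "=", and pairs are joined by "&".
const body = "fileName=avatar.png&userId=42";

// URLSearchParams does the parsing a server would otherwise do by hand.
const params = new URLSearchParams(body);
console.log(params.get("fileName")); // "avatar.png"
console.log(params.get("userId"));   // "42"
```

<p>Note that every value comes back as a string; this flat name-value format is exactly why files, which are raw binary, need the <code>multipart/form-data</code> encoding instead.</p>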
<h2 id="why-image-handling-is-different-on-express">Why is image handling different on Express?</h2>
<p>When you send form data to the Express backend, Express is equipped to handle the <code>application/x-www-form-urlencoded</code> and <code>text/plain</code> encodings, but it cannot process the <code>multipart/form-data</code> encoding, which is primarily used for uploading files. This is where Multer comes in: a Node.js middleware that handles multipart-encoded forms for us.</p>
<h2 id="setting-up-your-schema">Setting up your Schema</h2>
<p>You need to define a schema <code>Upload.js</code> for the collection where you are going to store your images. If you are using the native MongoDB drivers, you can skip this part. </p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Upload.js</span>
<span class="hljs-keyword">const</span> mongoose = <span class="hljs-built_in">require</span>(<span class="hljs-string">"mongoose"</span>);

<span class="hljs-keyword">const</span> UploadSchema = <span class="hljs-keyword">new</span> mongoose.Schema({
  <span class="hljs-attr">fileName</span>: {
    <span class="hljs-attr">type</span>: <span class="hljs-built_in">String</span>,
    <span class="hljs-attr">required</span>: <span class="hljs-literal">true</span>,
  },
  <span class="hljs-attr">file</span>: {
    <span class="hljs-attr">data</span>: Buffer,
    <span class="hljs-attr">contentType</span>: <span class="hljs-built_in">String</span>,
  },
  <span class="hljs-attr">uploadTime</span>: {
    <span class="hljs-attr">type</span>: <span class="hljs-built_in">Date</span>,
    <span class="hljs-attr">default</span>: <span class="hljs-built_in">Date</span>.now,
  },
});

<span class="hljs-built_in">module</span>.exports = Upload = mongoose.model(<span class="hljs-string">"upload"</span>, UploadSchema);
</code></pre>
<p>In the above schema, the <code>file</code> block is the most important one; the rest can be modified to suit your requirements.</p>
<h2 id="setting-up-multer">Setting up multer</h2>
<p>Install Multer for your application: </p>
<h4 id="using-npm">Using npm:</h4>
<p><code>npm i multer</code></p>
<h4 id="using-yarn">Using yarn:</h4>
<p><code>yarn add multer</code></p>
<p>Now, let's create a route that will handle file upload. But before that, let's enable our app to use multer in <code>upload.js</code>.</p>
<pre><code class="lang-js"><span class="hljs-comment">// upload.js</span>
<span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">'express'</span>)
<span class="hljs-keyword">const</span> multer  = <span class="hljs-built_in">require</span>(<span class="hljs-string">'multer'</span>)
<span class="hljs-comment">//importing mongoose schema file</span>
<span class="hljs-keyword">const</span> Upload = <span class="hljs-built_in">require</span>(<span class="hljs-string">"../models/Upload"</span>);
<span class="hljs-keyword">const</span> app = express()
<span class="hljs-comment">//setting options for multer</span>
<span class="hljs-keyword">const</span> storage = multer.memoryStorage();
<span class="hljs-keyword">const</span> upload = multer({ <span class="hljs-attr">storage</span>: storage });
</code></pre>
<p>This snippet makes sure that the file is parsed and stored in memory.
<strong>WARNING</strong>: Take steps to ensure that the file being uploaded isn't huge, or you could be looking at a denial-of-service threat.
There are some other options that you can use with multer. Take a look at them <a target="_blank" href="https://github.com/expressjs/multer#multeropts">here</a>.</p>
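<p>To act on the warning above, multer accepts <code>limits</code> and <code>fileFilter</code> options. The 2 MB cap and the image-only filter below are illustrative choices, not multer defaults:</p>

```javascript
// Illustrative cap: reject anything over 2 MB so one huge upload
// can't exhaust server memory (multer raises a LIMIT_FILE_SIZE error).
const MAX_FILE_SIZE = 2 * 1024 * 1024;

// A fileFilter callback using multer's (req, file, cb) signature:
// accept the file only when its mimetype says it is an image.
function imageOnlyFilter(req, file, cb) {
  cb(null, /^image\//.test(file.mimetype));
}

// Wiring both into the upload middleware from the snippet above:
// const upload = multer({
//   storage: multer.memoryStorage(),
//   limits: { fileSize: MAX_FILE_SIZE },
//   fileFilter: imageOnlyFilter,
// });
```

<p>The <code>cb(null, false)</code> path silently skips the file (so <code>req.file</code> will be <code>undefined</code>, which the route below already checks for); pass an <code>Error</code> as the first argument instead if you want the request to fail loudly.</p>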
<h2 id="using-multer-middleware-in-your-route">Using multer middleware in your route</h2>
<p>Now that you have set up multer successfully, it is time to use it as a middleware in your requests. </p>
<pre><code class="lang-javascript">app.post(<span class="hljs-string">"/upload"</span>, upload.single(<span class="hljs-string">"file"</span>), <span class="hljs-keyword">async</span> (req, res) =&gt; {
  <span class="hljs-comment">// req.file can be used to access all file properties</span>
  <span class="hljs-keyword">try</span> {
    <span class="hljs-comment">//check if the request has an image or not</span>
    <span class="hljs-keyword">if</span> (!req.file) {
      res.json({
        <span class="hljs-attr">success</span>: <span class="hljs-literal">false</span>,
        <span class="hljs-attr">message</span>: <span class="hljs-string">"You must provide at least 1 file"</span>
      });
    } <span class="hljs-keyword">else</span> {
      <span class="hljs-keyword">let</span> imageUploadObject = {
        <span class="hljs-attr">file</span>: {
          <span class="hljs-attr">data</span>: req.file.buffer,
          <span class="hljs-attr">contentType</span>: req.file.mimetype
        },
        <span class="hljs-attr">fileName</span>: req.body.fileName
      };
      <span class="hljs-keyword">const</span> uploadObject = <span class="hljs-keyword">new</span> Upload(imageUploadObject);
      <span class="hljs-comment">// saving the object into the database</span>
      <span class="hljs-keyword">const</span> uploadProcess = <span class="hljs-keyword">await</span> uploadObject.save();
      <span class="hljs-comment">// respond so the client isn't left hanging on success</span>
      res.json({ <span class="hljs-attr">success</span>: <span class="hljs-literal">true</span>, <span class="hljs-attr">message</span>: <span class="hljs-string">"File uploaded successfully"</span> });
    }
  } <span class="hljs-keyword">catch</span> (error) {
    <span class="hljs-built_in">console</span>.error(error);
    res.status(<span class="hljs-number">500</span>).send(<span class="hljs-string">"Server Error"</span>);
  }
});
</code></pre>
<p>You can also use <code>upload.array()</code> instead of <code>upload.single()</code> if you are expecting to receive more than one file from the frontend. More about that <a target="_blank" href="https://github.com/expressjs/multer#usage">here</a>.</p>
<h4 id="code-explanation">Code explanation</h4>
<p>The middleware <code>upload.single("file")</code> is used to tell the server that only one file is expected from the browser. The argument passed to <code>upload.single()</code> specifies the name of the file field in the HTML form. Using this middleware enables us to use <code>req.file</code> inside the route definition to access the received file. We used <code>req.file.buffer</code> and <code>req.file.mimetype</code> to save the file to the database. The <code>buffer</code> is the raw binary data of the file received, and we store it in the database as-is. The <code>req.file.mimetype</code> is also very important, as it tells the browser how to parse the raw binary data, i.e. what to interpret the data as, whether it be a PNG image, a JPEG, or something else. To find out what other information can be accessed from <code>req.file</code>, click <a target="_blank" href="https://github.com/expressjs/multer#file-information">here</a>. We had to break the file object into two properties, namely <strong>data</strong>, which contains the raw binary, and <strong>contentType</strong>, which contains the mimetype.</p>
<h2 id="sending-data-from-frontend">Sending data from Frontend</h2>
<p>Remember, multer only accepts <code>multipart/form-data</code> for files. That is why we need to set the same encoding type on our frontend.</p>
<pre><code class="lang-html"><span class="hljs-tag">&lt;<span class="hljs-name">form</span> <span class="hljs-attr">action</span>=<span class="hljs-string">"/upload"</span> <span class="hljs-attr">method</span>=<span class="hljs-string">"post"</span> <span class="hljs-attr">enctype</span>=<span class="hljs-string">"multipart/form-data"</span>&gt;</span>
  <span class="hljs-tag">&lt;<span class="hljs-name">input</span> <span class="hljs-attr">type</span>=<span class="hljs-string">"file"</span> <span class="hljs-attr">name</span>=<span class="hljs-string">"file"</span> /&gt;</span>
<span class="hljs-tag">&lt;/<span class="hljs-name">form</span>&gt;</span>
</code></pre>
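<p>If you submit from JavaScript instead of a plain HTML form, the browser's <code>FormData</code> API produces the same <code>multipart/form-data</code> encoding. This sketch assumes the <code>/upload</code> route and <code>file</code> field name used earlier:</p>

```javascript
// Build a multipart body; the "file" field name must match the argument
// given to upload.single() on the server.
function buildUploadBody(file, fileName) {
  const formData = new FormData();
  formData.append("fileName", fileName);
  formData.append("file", file);
  return formData;
}

// In the browser (fileInput being an <input type="file"> element):
// fetch("/upload", { method: "POST", body: buildUploadBody(fileInput.files[0], "avatar.png") })
//   .then((res) => res.json())
//   .then((data) => console.log(data));
//
// Note: don't set the Content-Type header yourself; fetch adds the correct
// multipart boundary automatically when the body is a FormData instance.
```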
<h2 id="how-to-convert-it-back-to-an-image">How to convert it back to an image?</h2>
<p>Well, there are basically two ways you can do this. You either convert the binary data to an image on the backend and then send it to the frontend, or you send the binary data to the frontend and convert it to an image there. It totally depends on your liking and your use case. How to do it? Well, that article is for another WebDev Monday.</p>
]]></content:encoded></item><item><title><![CDATA[Installing Ubuntu Server 20.04]]></title><description><![CDATA[Introduction
Ubuntu is undoubtedly one of the most popular  Linux Distro out there. It is quite common for someone charting Linux waters to dive into Ubuntu at one point in time, even if it is just to check what it tastes like. But there is not just ...]]></description><link>https://blogs.yasharyan.dev/installing-ubuntu-server-2004</link><guid isPermaLink="true">https://blogs.yasharyan.dev/installing-ubuntu-server-2004</guid><category><![CDATA[Linux]]></category><category><![CDATA[linux for beginners]]></category><category><![CDATA[Ubuntu]]></category><category><![CDATA[server]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Tue, 20 Apr 2021 03:24:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1619232103712/xjfSqzjQU.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="introduction">Introduction</h3>
<p>Ubuntu is undoubtedly one of the most popular <a target="_blank" href="https://en.wikipedia.org/wiki/Linux_distribution">Linux distros</a> out there. It is quite common for someone charting Linux waters to dive into Ubuntu at some point, even if it is just to check what it tastes like. But there is not just one Ubuntu out there. There are actually five.</p>
<ul>
<li>Ubuntu Cloud </li>
<li>Ubuntu Core</li>
<li>Ubuntu Kylin</li>
<li>Ubuntu Desktop</li>
<li>Ubuntu Server</li>
</ul>
<p>In this blog, I will help you install Ubuntu Server. Let's get started. </p>
<h3 id="why-ubuntu-server">Why Ubuntu Server?</h3>
<p>You might have heard about the Debian-based operating system Ubuntu at some point in your life. But now, what is this Ubuntu Server?</p>
<p>Ubuntu.com defines Ubuntu Server as:</p>
<blockquote>
<p>Ubuntu Server is a variant of the standard Ubuntu you already know, tailored for networks and services. It’s just as capable of running a simple file server as it is operating within a 50,000 node cloud.</p>
</blockquote>
<p>The difference is that Ubuntu Desktop comes with a desktop environment along with other applications that make operating it easier, like a visual file manager, a music player, a web browser, and a text editor. Ubuntu Server, on the other hand, does not come with any of this bundled with the installation; all it has is a terminal.</p>
<p>If you are unsure if Ubuntu Server is the right choice for you, <a target="_blank" href="https://www.makeuseof.com/tag/difference-ubuntu-desktop-ubuntu-server/">read this article</a>.</p>
<p>A GUI can be installed on Ubuntu Server afterwards by installing the necessary packages, if that is what you are comfortable with, but with it comes additional memory utilization, meaning your server will have to struggle a little more to run smoothly. So if you are not comfortable with the command-line interface, you can always switch to a GUI on the server.</p>
<h3 id="hardware-requirement">Hardware Requirement</h3>
<p>Since the default Ubuntu Server installation has minimal GUI, the hardware requirements are minimal too. Then again, it depends on what kind of server applications you are going to run. Some applications demand high resources and might need a better-configured system, while others run perfectly on the minimum required configuration. You also won't need a mouse, or even a keyboard and a monitor, after you have successfully installed the OS on your system. The OS image file for Ubuntu Server is just 1.1 GB, compared to 2.7 GB for Ubuntu Desktop, and the minimum hardware requirements are:</p>
<ul>
<li><strong>RAM</strong>: 1 GB</li>
<li><strong>CPU</strong>: 1 GHz</li>
<li><strong>Storage</strong>: 2.5 GB disk space (You'll probably need more)<br /></li>
</ul>
<p>This means that you can easily pick a system from the last decade and turn it into your server.</p>
<h3 id="download">Download</h3>
<p>You can head on to the official <a target="_blank" href="https://ubuntu.com/download/server">Ubuntu website</a> and download the image. The page has three options. We'll be using <em>Option 2: Manual Server Installation</em>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619147210437/ZL8K2_V16.png" alt="Ubuntu Server download page" /></p>
<h3 id="burn-your-disc">Burn your disc</h3>
<p>Now that you have the ISO image file for Ubuntu Server, you need to write it to a disk to install it on your server system. I usually prefer <a target="_blank" href="https://rufus.ie/en_US/">Rufus</a> to make a bootable device, since it allows me to burn the image to a flash drive as well. But there is nothing wrong with sticking to the traditional CD-ROM.</p>
<h3 id="installation">Installation</h3>
<p>Now that all your pre-installation needs have been fulfilled, let's dive into the installation process.</p>
<h4 id="step-1-booting-in">Step 1: Booting In</h4>
<p>Plug your flash drive into a USB port (or, if you are using a CD, put it in your optical disk drive) and restart your PC. You should reach a page that looks somewhat like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619179269105/LSFF51aRC.png" alt="Intro Install Page of Ubuntu Server).png" /></p>
<p>If that does not happen, you might want to refer to this <a target="_blank" href="https://www.easeus.com/backup-recovery/bootable-usb-drive-not-showing-up-or-recognized.html">article</a>. There may not be anything wrong with your PC; it may just be some security settings enabled in the BIOS.</p>
<h4 id="step-2-language-selection">Step 2: Language Selection</h4>
<p>You should now be on the installation page of Ubuntu Server. As discussed before, Ubuntu Server has an insubstantial GUI presence, and the installer is no different: you will not be able to use your mouse here. All you need is your keyboard and your screen.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619180596986/SgzCkcuvp.png" alt="Language selection" /></p>
<p>Select your preferred language, navigating and confirming with your keyboard's <code>Up</code>, <code>Down</code>, and <code>Enter</code> keys.</p>
<h4 id="step-3-keyboard-configuration-selection">Step 3: Keyboard Configuration Selection</h4>
<p>Select the correct keyboard configuration based on the type of keyboard you are using. In most cases it is detected automatically, but if it is not, you can always select whichever layout suits you. If you are unsure, go with the default options and change them after the installation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619181069465/ckUUtRzc-.png" alt="Keyboard configuration selection" /></p>
<h4 id="step-4-network-connection-settings">Step 4: Network Connection Settings</h4>
<p>You'll now need to configure the network settings for your server. The default is automatic configuration via DHCP, but you might want something else, such as a static IP address for a public-facing server. If you are unsure, consult your network administrator.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619181804842/YmFjVJnGY.png" alt="Network configuration settings" /></p>
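<p>For reference, the network choices you make here end up in a netplan file on the installed system, so you can also switch to a static address later by editing it. A minimal sketch (the file name, the interface name <code>eth0</code>, the addresses, and the gateway are all placeholders; substitute the values for your own network):</p>

```yaml
# /etc/netplan/00-installer-config.yaml (sketch, not authoritative)
network:
  version: 2
  ethernets:
    eth0:                 # placeholder; find yours with `ip link`
      dhcp4: no
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
```

<p>Apply the change with <code>sudo netplan apply</code>.</p>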
<h4 id="step-5-configuring-proxy">Step 5: Configuring Proxy</h4>
<p>If you or your organization uses a proxy to access the internet, this is the time to set it up. If you are unsure about this step, you can skip it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619182211368/2bEhtIxiG.png" alt="Proxy Setup" /></p>
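<p>The proxy you enter here mostly matters to <code>apt</code>, and you can also configure it after installation. A sketch of such a configuration file (the host and port are placeholders; get the real values from your network administrator):</p>

```
# /etc/apt/apt.conf.d/95proxy (sketch)
Acquire::http::Proxy "http://proxy.example.com:3128/";
Acquire::https::Proxy "http://proxy.example.com:3128/";
```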
<h4 id="step-6-configure-ubuntu-archive-mirrors">Step 6: Configure Ubuntu Archive Mirrors</h4>
<p>Mirror sites, or mirrors, are replicas of other websites or network nodes. Here you can select a mirror that is closer to your physical location for faster download speeds. The best mirror might already have been selected by the installer, but you can always change it if you are not satisfied.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619200993307/juP4XLDP5.png" alt="Ubuntu package mirror selection" /></p>
<p><em>You can read more about Mirroring <a target="_blank" href="https://wiki.ubuntu.com/Archive/Mirroring#:~:text=Ubuntu%20Mirror%20System,release%2Dcd%2Donly%20mirrors">here</a>.</em> <br />
<em>Read how you can get the fastest download speeds for your packages using the correct mirror <a target="_blank" href="https://linuxconfig.org/how-to-select-the-fastest-apt-mirror-on-ubuntu-linux">here</a></em>.</p>
<h4 id="step-7-storage-configuration">Step 7: Storage Configuration</h4>
<p>Now you need to set up your system's disk for installation. You can configure it to use a part of the disk, or the entire disk. I am selecting the entire disk, but if you prefer to dual boot or keep part of your disk for some other purpose, you can use the <em>Custom Storage Layout</em> option.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619198927991/N4-R5LVNE.gif" alt="Storage Configuration" /></p>
<p>You can set up LVM (Logical Volume Manager) on your system to manage your storage space better. LVM works like dynamic partitioning: you can create, resize, or even delete logical volumes right from your terminal, while logged in, without rebooting the system. You can pop in more storage drives, and LVM will span across them.
Read <a target="_blank" href="https://blog.vpscheap.net/when-to-use-lvm/">this article</a> if you are unsure if LVM is the right choice for you.</p>
<p>After selecting <strong>Done</strong>, you'll be taken to this page: 
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619201121379/D5ALUCpnx.png" alt="Storage Configuration setup partition" />
Here you can partition your drive accordingly and set up <a target="_blank" href="https://unix.stackexchange.com/a/12086">mount points</a>. When you are done, proceed by pressing Done. A popup will appear on your screen with this warning:
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619202581004/qOT7iorW4.png" alt="Storage Configuration Dialog" />
This is totally normal, and if you have partitioned your disk properly, you can proceed by pressing Continue.
Do keep in mind that you will not be able to return to this page, or any previous installation page, after this step. </p>
<h4 id="step-8-profile-setup">Step 8: Profile Setup</h4>
<p>The installation has begun, but your part is not over yet. You need to create a user through which you can log in to the system. 
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619203048165/izwLYaR40.png" alt="Profile Setup Page" />
Enter your name, then create a username, a password, and your computer's hostname. The hostname is the name by which your server will be known to other computers on the network, so choose one you can easily identify. When you are satisfied, press <strong>Done</strong>.</p>
<h4 id="ssh-setup-and-server-snaps">SSH Setup and Server Snaps</h4>
<p>If you want to access your server remotely, you might want to install the OpenSSH server. 
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619206742742/F0kq7buIE.png" alt="OpenSSH installation" />
Select the <em>Install OpenSSH server</em> option by pressing <code>Space</code> when the option is highlighted, and then press Done. You'll now be taken to a page where you can install additional packages (snaps) for your server. Choose whatever you like; you don't have to select any, and you can skip this step altogether by pressing Done if you are not sure.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619207145546/yK30eIT4R.png" alt="Ubuntu Server Snaps" />
When you press Done, you'll be able to see the installation logs. Please wait for it to finish, and then select Reboot Now. Your system will restart, and you'll be able to see the Ubuntu Server Login prompt.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1619207855490/4hTJUoBq1.png" alt="Installation Logs" />
If you chose to install OpenSSH and other server snaps, you will not see the Reboot option yet; instead, you'll see 'Cancel update and reboot'. You will have to wait for those packages to be installed, and the duration depends on the size of your snaps and your network connection. If you are not the patient type, you can cancel the update and reboot anyway.</p>
<h3 id="congratulations">Congratulations</h3>
<p>Congratulations, you just installed Ubuntu Server on your computer. A lot of exciting things await. You can install a reverse proxy, a web control panel, host your websites, or even set up an SMB server. Google your way to it, or wait for my next article.</p>
]]></content:encoded></item><item><title><![CDATA[Step-by-Step Guide: Turn Your Raspberry Pi into a Local Server]]></title><description><![CDATA[Did you buy a new Raspberry Pi and want to set up a local home server on it, and don't know how to set it up? Well, you are viewing the correct blog post. When I got a Raspberry Pi, I had no idea what I would do with it apart from using it as a WiFi ...]]></description><link>https://blogs.yasharyan.dev/step-by-step-guide-turn-your-raspberry-pi-into-a-local-server</link><guid isPermaLink="true">https://blogs.yasharyan.dev/step-by-step-guide-turn-your-raspberry-pi-into-a-local-server</guid><category><![CDATA[Raspberry Pi]]></category><category><![CDATA[server]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Wed, 13 Jan 2021 19:03:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1610564573565/AJrTq_Cj3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Did you buy a new Raspberry Pi and want to set up a local home server on it, and don't know how to set it up? Well, you are viewing the correct blog post. When I got a Raspberry Pi, I had no idea what I would do with it apart from using it as a WiFi router. But as I kept experimenting, I realized that there was too much I could do with it. In this article, I will guide you through the process of setting up a home server on your Raspberry Pi.</p>
<h2 id="heading-my-case">My case</h2>
<p>I live in my university's hostel. The WiFi reception in my block, and especially in my room, is horrible, so we use the RJ45 port in our room instead. A restriction on my campus network is that only three devices can be registered, and of those, only one can be connected at a time. But I had 4 devices that needed to be connected to the internet, sometimes simultaneously. I know, it's a bit convoluted. One more thing that makes it a pain is that there is a captive portal involved, so I have to log in every time I switch devices.</p>
<h2 id="heading-solutions-i-thought-of">Solutions I thought of</h2>
<p>Before I thought of getting a Raspberry Pi, I thought of other methods. One way was to connect my PC to the ethernet port and then turn on the hotspot. Easy, right? But this would mean that I had to keep my PC turned on throughout the day, and laptop batteries aren't cheap. This also meant that I had to say goodbye to my laptop's portability.</p>
<p>Another solution was to buy a router and connect it to the port. I had almost decided to buy one, but I wanted more features, and I wanted a signal good enough for my room, but not for the person living across the hallway. A laptop hotspot was good for this, but you already know its limitations. I also wanted a router with NAS support. Well, there goes my budget. Then I came across the Raspberry Pi: a mini-computer that runs Linux and is fully programmable. I had found the perfect toy for myself.</p>
<h2 id="heading-setup">Setup</h2>
<p>I ended up purchasing a Raspberry Pi 4 Model B. It has 4 GB RAM, a Gigabit Ethernet Chip, 2 USB 2.0 ports, and 2 USB 3.0 ports. You can buy it <a target="_blank" href="https://www.thingbits.in/products/raspberry-pi-4-model-b-4-gb-ram">here</a>. I connected my Raspberry Pi to the ethernet port using an RJ45 cable, connected it to a power source, and connected my external HDD to one of the USB ports (for making a NAS). I could have used the <a target="_blank" href="https://www.raspberrypi.org/software/">Raspberry Pi OS</a>, but I wanted something light. So I went ahead with <a target="_blank" href="https://ubuntu.com/download/raspberry-pi">Ubuntu Server</a>.</p>
<h2 id="heading-getting-started">Getting started</h2>
<p>For this tutorial, I am assuming that your distro is up-to-date and the device is running the most recent stable version of Ubuntu. For the time being, you either need to connect your Pi to an external display, or you can connect to it using VNC. You can also SSH into it if both devices are on the same network. So now, let's get started.</p>
<h3 id="heading-setting-up-the-server">Setting up the server</h3>
<p>Now that everything is understood, let us start with setting up the server.</p>
<h3 id="heading-setting-up-the-r-pi-hostname">Setting up the R-Pi hostname</h3>
<p>You can reach your Raspberry Pi on your local network using the URL <code>raspberrypi.local</code>. But you can change this to anything, like <code>myserver.local</code> or <code>mypi.local</code> by editing the <code>/etc/hosts</code> file. More on that <a target="_blank" href="https://www.slicethepi.co.uk/modify-host-file/">here</a>. Once you do this, you are good to go.</p>
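<p>As a rough sketch of what that change involves: the advertised name comes from <code>/etc/hostname</code>, and the <code>127.0.1.1</code> line in <code>/etc/hosts</code> should be kept in sync with it. With the placeholder name <code>mypi</code> (pick your own), the two files would contain something like this, followed by a reboot:</p>

```text
# /etc/hostname (sketch)
mypi

# /etc/hosts (sketch)
127.0.0.1   localhost
127.0.1.1   mypi
```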
<h3 id="heading-setting-up-your-pi-as-an-ap">Setting up your Pi as an AP</h3>
<p>Connect your ethernet cable to the Pi and to the wall port. The next step is installing a piece of software called <code>raspap-webgui</code>, which helps set up your wireless AP.</p>
<blockquote>
<p>RaspAP lets you quickly get a WiFi access point up and running to share the connectivity of many popular Debian-based devices, including the Raspberry Pi. Our popular Quick installer creates a known-good default configuration that "just works" on all current Raspberry Pis with onboard wireless. A responsive interface gives you control over the relevant services and networking options. Advanced DHCP settings, OpenVPN client support, SSL, security audits, themes, and multilingual options are included.</p>
</blockquote>
<p>Head over to <a target="_blank" href="https://github.com/billz/raspap-webgui#quick-installer">github</a> to learn how to install it on your device.</p>
<p>After the installation is done, you can see <code>raspi-webgui</code> as a wireless AP on your wireless-enabled devices. You can also find the default password to the AP and the router settings <a target="_blank" href="https://github.com/billz/raspap-webgui#quick-installer">here</a>. Your router's default IP will be <code>10.3.141.1</code>. You can access your router settings by going to http://10.3.141.1 and using the default password and username provided in the readme of the GitHub repository.</p>
<p>You have successfully created a wireless AP using your raspberry at this stage.</p>
<h3 id="heading-hosting-websites-on-your-pi">Hosting websites on your PI</h3>
<p>Now that you are connected to your Pi's network, let us make it more usable. I have my personal journal set up on my device. It can only be accessed when I am connected to the raspi-webgui AP. You can find it <a target="_blank" href="https://github.com/canaryGrapher/Open-Journal">here</a>. To set this up, I installed <a target="_blank" href="https://www.nginx.com/">Nginx</a>. Using it, I can host multiple websites on my Pi. Since I already have raspberrypi.local as my Pi's hostname, I can set up multiple sub-domains to host different websites. If you do not know how to set up Nginx, you can take a look <a target="_blank" href="https://www.digitalocean.com/community/tutorials/understanding-the-nginx-configuration-file-structure-and-configuration-contexts">here</a>.</p>
<p>To make sure Nginx is installed correctly, browse to raspberrypi.local while connected to the newly created AP. You should see a page that looks like this.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1610539791864/ppidHwL5z.png" alt="Nginx home screen" /></p>
<p>Let's say I set up Nginx to use <code>journal</code> as the sub-domain hosting my journal website; I can then enter journal.raspberrypi.local in my browser's address bar, while connected to <code>raspi-webgui</code>, to view it. I can set up more websites on this server by creating more Nginx configuration files, each with its own sub-domain.</p>
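<p>A minimal Nginx server block for such a sub-domain might look like the sketch below; the file name, the site root, and the <code>journal</code> name are assumptions for illustration, not the exact setup from this article:</p>

```nginx
# /etc/nginx/sites-available/journal (sketch)
server {
    listen 80;
    server_name journal.raspberrypi.local;

    root /var/www/journal;   # placeholder path to the site's files
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

<p>Enable it with <code>sudo ln -s /etc/nginx/sites-available/journal /etc/nginx/sites-enabled/</code>, then check and reload with <code>sudo nginx -t && sudo systemctl reload nginx</code>.</p>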
<h3 id="heading-setting-up-smb-on-the-pi">Setting up SMB on the Pi</h3>
<p>I have a lot of study materials, movies, photos, and videos on my external HDD. Whenever I come to my room, I first have to turn on my laptop and then connect it to my hard drive to access those files. If I want to lie down and use my phone to watch some video, I first have to transfer it to my phone and then watch it. I know. It does not sound like too much work, but I am lazy. I like to do things with as few steps as possible. Maybe that's why I am bad at chess. Getting back to the discussion, I connected my hard drive to the Pi and installed Samba to access the files on the drive from any device whenever I am connected to my Pi AP. <a target="_blank" href="https://pimylifeup.com/raspberry-pi-samba/">This guide</a> provides a straightforward method to install Samba on your Pi. Do remember to mount your drive before you proceed. You can mount as many drives as you like on your Pi and access them from any device connected to your AP.</p>
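<p>For orientation, a Samba share is just a small block appended to <code>smb.conf</code>. A sketch follows, where the share name, the mount point, and the <code>pi</code> user are placeholders:</p>

```ini
# appended to /etc/samba/smb.conf (sketch)
# "path" is wherever you mounted the drive; the "pi" user must also be
# registered with Samba via `sudo smbpasswd -a pi`
[ExternalHDD]
   path = /mnt/external
   browseable = yes
   read only = no
   valid users = pi
```

<p>Restart the service with <code>sudo systemctl restart smbd</code> for the share to appear on your network.</p>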
<h3 id="heading-what-can-you-do-next">What can you do next?</h3>
<p>That was it for this tutorial. But that is not the end of possibilities. These are some other things you can add to your server.</p>
<ul>
<li><p>Using the temperature and humidity sensor to get the weather report.</p>
</li>
<li><p>Control some IoT devices using your Pi connected on the same network.</p>
</li>
<li><p>Setting up a hostel room security service.</p>
</li>
<li><p>Installing Alexa on your Pi.</p>
</li>
<li><p>You can go ahead and contact your ISP to provide you with a <strong>static public IP address</strong> so that you can access your Pi from anywhere in the world. (This might be risky. If you do not have the skills to make your server secure, you might be putting every device in your local network at risk) The list of things you can do with this setup is endless. It's all based on how creative you get.</p>
</li>
</ul>
<h1 id="heading-thanks">Thanks</h1>
<p>Hope you found this article informative and intriguing. If it helped you, do give it a like 🧡 and share it with people who might find it useful. Also check out my <a target="_blank" href="https://www.instagram.com/encodable/">Instagram</a> and <a target="_blank" href="https://www.facebook.com/enc0dable">Facebook</a> pages for more content.</p>
<p><a target="_blank" href="https://www.buymeacoffee.com/yasharyan"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" /></a></p>
]]></content:encoded></item><item><title><![CDATA[Gitignore still commiting the ignored files/folder?]]></title><description><![CDATA[Understanding the 'why'
If you are new to Git, and you have just discovered the .gitignore file. It is a simple implementation. Just add file names or extensions to the file and your files will be omitted. But, it is not always so easy.
What is happe...]]></description><link>https://blogs.yasharyan.dev/gitignore-still-commiting-the-ignored-filesfolder</link><guid isPermaLink="true">https://blogs.yasharyan.dev/gitignore-still-commiting-the-ignored-filesfolder</guid><category><![CDATA[Git]]></category><category><![CDATA[version control]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Sat, 26 Dec 2020 17:50:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1609437143517/oecp4KcEF.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="understanding-the-why">Understanding the 'why'</h2>
<p>If you are new to Git and have just discovered the <code>.gitignore</code> file, it looks like a simple mechanism: just add file names or extensions to the file, and those files will be omitted. But it is not always so easy.</p>
<h3 id="what-is-happening">What is happening?</h3>
<p>So you have set up your project, initialized Git, and are already halfway through. But now you remember that you forgot to exclude your <code>node_modules</code> folder in the <code>.gitignore</code> file. No worries. Just add a <code>node_modules</code> entry to the <code>.gitignore</code> file and push it to GitHub. Simple, isn't it?
Well, not quite. If you open GitHub and look at the code pane, you can still see the <code>node_modules</code> directory there. But how can this be? Where did you go wrong? Did GitHub mess up? Check your <code>.gitignore</code> file again; you will not find anything wrong with it. It still looks like this:</p>
<pre><code class="lang-gitignore">node_modules
env
logs
*.log
yarn-debug.log*
yarn-error.log*
...
</code></pre>
<p>There still is a <code>node_modules</code> entry, which means that you entered it correctly and it was saved successfully. So what could the issue be? Is <code>.gitignore</code> being adamant?</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1609437108794/n4bE3U6GS.jpeg" alt="Sample code screenshot" /></p>
<h3 id="why-is-this-happening">Why is this happening?</h3>
<p>Well, the <code>.gitignore</code> file is doing its job perfectly. Git is the one at fault here. But don't just start pointing fingers; there is a very good reason for this behaviour.</p>
<blockquote>
<p><code>.gitignore</code> will prevent untracked files from being added to the set of files tracked by git, however, git will continue to track any files that are already being tracked.</p>
</blockquote>
<p>So, in simple words, <code>.gitignore</code> does not affect files that are already indexed. Since the <code>node_modules</code> folder was added to the Git index on the first commit, it won't be run through <code>.gitignore</code> again. Git only checks newly created files, not existing ones; thus, the <code>node_modules</code> folder will never be removed unless you do something about it.</p>
<p>The Git index is, in a way, like a cache for your project. It is the intermediate level between your local project files and the commits to your repository. To fix our problem, we need to remove the offending entries from this cache so that the project can be staged afresh for the next commit.</p>
<h2 id="knowing-the-how">Knowing the 'how'</h2>
<p>Now that we know why this problem is happening, let's fix it.</p>
<h3 id="clearing-your-staged-project">Clearing your staged project</h3>
<p>To untrack a single file that has already been added to your repository (i.e., stop tracking the file without deleting it from your system), use:</p>
<pre><code class="lang-git">git rm --cached filename
</code></pre>
<p>This is useful when you have just added one file to the <code>.gitignore</code> file.</p>
<p>To untrack every file that is now in your <code>.gitignore</code>: first, commit any outstanding code changes, and then run this command:</p>
<pre><code class="lang-git">git rm -r --cached .
</code></pre>
<p>This removes every tracked file from the index (staging area); the next commit re-adds them, minus whatever <code>.gitignore</code> now excludes. It comes in handy when you have added more than one file to the <code>.gitignore</code> file.</p>
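<p>The whole problem and its fix can be reproduced in a throwaway repository. The sketch below assumes nothing beyond Git itself; <code>demo</code> is just a scratch directory:</p>

```shell
# Recreate the situation: node_modules gets committed before .gitignore exists.
rm -rf demo && mkdir -p demo/node_modules
echo "lib" > demo/node_modules/index.js
git init -q demo
git -C demo config user.email "demo@example.com"
git -C demo config user.name "Demo"
git -C demo add .
git -C demo commit -qm "initial commit"       # node_modules is now tracked

echo "node_modules" > demo/.gitignore         # adding this alone changes nothing
git -C demo add .gitignore
git -C demo commit -qm "add gitignore"
git -C demo ls-files                          # node_modules/index.js still listed

# The fix: drop it from the index (the file stays on disk), then commit.
git -C demo rm -r -q --cached node_modules
git -C demo commit -qm "stop tracking node_modules"
git -C demo ls-files                          # only .gitignore remains tracked
```

<p>Note that the file itself is untouched on disk; only Git's index forgets it, so collaborators who pull will have it deleted from their working trees unless they keep their own copy.</p>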
<h1 id="thanks">Thanks</h1>
<p>Hope you found this article informative and intriguing. If it helped you, do give it a like 🧡 and share it with people who might find it useful. Also check out my  <a target="_blank" href="https://www.instagram.com/encodable/">Instagram</a> and <a target="_blank" href="https://www.facebook.com/enc0dable">Facebook</a> pages for more content.
<a href="https://www.buymeacoffee.com/yasharyan" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" /></a></p>
]]></content:encoded></item><item><title><![CDATA[How to create an AWS EC2 instance?]]></title><description><![CDATA[What is AWS?
If you know what Infrastructure-as-a-Service (IaaS) means, you probably would know of AWS, Amazon Cloud Service. IaaS lets users sublet virtual networks, machines, storage, and servers, etc. AWS offers a suite of cloud-computing services...]]></description><link>https://blogs.yasharyan.dev/how-to-create-an-aws-ec2-instance</link><guid isPermaLink="true">https://blogs.yasharyan.dev/how-to-create-an-aws-ec2-instance</guid><category><![CDATA[AWS]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[ec2]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Sat, 19 Dec 2020 04:50:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1608408901667/3F0mK9oMv.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="what-is-aws">What is AWS?</h2>
<p>If you know what Infrastructure-as-a-Service (IaaS) means, you have probably heard of AWS, Amazon Web Services. IaaS lets users rent virtual networks, machines, storage, servers, and so on. AWS offers a suite of cloud-computing services that provides an on-demand computing platform. It is one of the best services you can find to deploy different applications to the cloud.</p>
<p><img src="https://miro.medium.com/max/1364/0*2ui893KAwAT_F9wz.gif" alt="AWS services" /></p>
<p><a target="_blank" href="https://aws.amazon.com/what-is-aws/">This</a> AWS webpage explains in short what AWS is:</p>
<blockquote>
<p>Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform, offering over 175 fully-featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster.</p>
</blockquote>
<p>Cloud computing has become a necessity for businesses, providing flexible, cost-effective, on-demand storage and compute. Businesses need not set up high-maintenance, expensive servers for their needs; all they need to do is rent them.
Now, you'd be thinking: why should I choose AWS over the others?</p>
<ul>
<li>AWS features a pay-as-you-go model, so you only pay for what you use.</li>
<li>AWS has been around the longest, since 2006.</li>
<li>There are 24 regions and 77 availability zones globally.</li>
<li>Choice of Intel, AMD, and Arm-based processors.</li>
<li>They recently added support for macOS, the first, and to date, the only cloud provider to do that.</li>
<li>Some of the major services AWS provides include Amazon Cloud Front, Amazon Elastic Compute Cloud (EC2), Amazon Relational Database Service (Amazon RDS), Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), Amazon Simple Storage Service (Amazon S3), Amazon SimpleDB, and Amazon Virtual Private Cloud (Amazon VPC). With so many options to choose from, it has become a popular choice among multiple developers and users.</li>
</ul>
<h2 id="what-is-ec2">What is EC2?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608455514821/ae41HQMU_.jpeg" alt="AWS EC2 service" />
The Amazon Elastic Compute Cloud, also known as EC2, is a web service that provides secure, resizable compute capacity in the cloud. These servers, commonly known as instances, give developers access to the compute capacity of the global AWS data centers. An instance is a virtual machine with whatever configuration you want to run in the cloud. You can have an Ubuntu Server running with 8 gigs of storage and 4 gigs of memory, or a Windows Server with 8 gigs of memory and 50 gigs of storage. It's all customizable.</p>
<p>In this article, we are going to set up an EC2 Ubuntu instance, free of cost. Learn more about the  <a target="_blank" href="https://aws.amazon.com/free/">AWS Free Tier</a>.</p>
<h2 id="how-to-create-an-aws-account">How to create an AWS account?</h2>
<p>Before we launch an EC2 instance, you'll need an AWS account. First, open up https://aws.amazon.com/ and click on the <strong>Create an AWS Account</strong> button on the top right corner of the web page. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608461219148/xuD2XVrOC.png" alt="AWS Homepage" /></p>
<p>You create an account by filling out all the fields on this page and following through. It is a pretty straightforward process. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608463198565/iIsBdevW_.png" alt="Signup Form" />
As you continue filling out the form, you'll be asked for contact details and billing details (don't worry, Amazon will charge you just ₹1 for verification purposes), followed by phone verification. Finally, you can choose a plan for your account from the Basic, Developer, and Business options. The free tier is included in the Basic plan.</p>
<p>Do remember to change the account type to Personal on the Contact Information page. After you are done, you can sign in to your AWS Console. If you are stuck somewhere, you can refer to the video below. <em>(Click on it)</em>
<a target="_blank" href="http://www.youtube.com/watch?v=v3WLJ_0hnOU"><img src="https://d2908q01vomqb2.cloudfront.net/22d200f8670dbdb3e253a90eee5098477c95c23d/2017/11/16/AWSKnowledgeCenter_800x400.png" alt="How to create an AWS account" /></a>
Or, if you are more of a reading person, you can refer to <a target="_blank" href="https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/">this</a> article.</p>
<h2 id="how-to-set-up-an-ec2-instance">How to set up an EC2 instance?</h2>
<p>Login to your <a target="_blank" href="https://console.aws.amazon.com/">AWS Console</a> with the account you just made (or an existing one).
You should have reached a page that looks like this:
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608464652620/o4G4b7rzO.png" alt="AWS Console" />
You can see the <a target="_blank" href="https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:">Launch a virtual machine</a> option in the <strong>Build a solution</strong> container. This is what we will be setting up. </p>
<h3 id="step-1-selecting-an-ami">Step 1: Selecting an AMI</h3>
<p>After you have clicked on the <a target="_blank" href="https://console.aws.amazon.com/ec2/v2/home?#LaunchInstanceWizard:">Launch a virtual machine</a> link, you'll be redirected to a page that lets you choose an AMI (Amazon Machine Image), or in layman's terms, an operating system image. There are many options to choose from, including the recently added macOS. But do keep in mind that <em>not all AMIs fit the free tier plan</em>. Only those with a <code>Free tier eligible</code> banner below them are included in the 12-month free period. I'll be choosing the <em>Ubuntu Server 20.04 LTS</em> AMI for this tutorial, but you are free to choose any. If you pay close attention, you'll also see an option to choose the AMI architecture: either 64-bit Arm or 64-bit x86. I will stick to the default x86 option and click on the blue <strong>Select</strong> button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608466042799/gQ6gOF696.png" alt="Creating an AMI" /></p>
<h3 id="step-2-your-instance-type">Step 2: Your instance type</h3>
<p>Next, you'll be able to customize the AMI according to needs. The first thing that comes after selecting an AMI is choosing an instance type. There are so many options here, but I will be selecting <code>t2.micro</code>. You can see the details about a particular type of instance in the fields adjacent to it, in the same row. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608467192103/ZEMwvfubV.png" alt="Selecting an instance type" /></p>
<p>Click on the <code>Next: Configure Instance Details</code> button on the bottom right of the page.</p>
<h3 id="step-3-configuring-your-instance">Step 3: Configuring your Instance</h3>
<p>On the Configure Instance Details page, you can set up your instance according to your needs. I will leave everything at the defaults, but you can change the settings if you are familiar with them. You can read about what each option does by hovering over the <strong><code>i</code></strong> button next to its label.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608467552996/mA8w5GelU.png" alt="Configuring the instance" /></p>
<p>After making the changes (if you do any), click on the <code>Next: Add Storage</code> button on the bottom right.</p>
<h3 id="step-4-allocating-disk-space-to-your-ami">Step 4: Allocating disk space to your AMI</h3>
<p>Now, let's do the storage allocation. When the <strong>Add Storage</strong> window opens, you'll have options to configure the storage size, the type of storage, and the encryption settings. You can add multiple storage volumes to your instance, but the total cannot exceed 30 GB for free-tier users. </p>
<blockquote>
<p>Free tier eligible customers can get up to 30 GB of EBS General Purpose (SSD) or Magnetic storage.  <a target="_blank" href="https://aws.amazon.com/free/">Learn more</a>  about free usage tier eligibility and usage restrictions.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608483350860/W0tqi09c-.png" alt="Configuring Storage on AWS" /></p>
<p>I am leaving the storage settings on this page at their defaults as well, giving 8 GB of storage to my Ubuntu machine, which in my view is more than enough as long as I am not running a website that needs to store a lot of data. After you are done setting up your storage volumes, click on the <code>Next: Add Tags</code> button on the bottom right.</p>
<h3 id="step-5-adding-tags">Step 5: Adding Tags</h3>
<p>The next step is adding tags to your AMI. Tags are used to label an AWS resource. A tag consists of a key-value pair which helps in identifying the resource. More about that over  <a target="_blank" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html">here</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608484233633/nMlM5eVO3.png" alt="Adding tags to your AMI" /></p>
<p>I won't be making any changes here because I don't need to label resources for this tutorial. Click on the <code>Next: Configure Security Group</code> button to proceed.</p>
<h3 id="step-6-configuring-security-settings">Step 6: Configuring Security Settings</h3>
<p>Configuring security settings is a crucial step you don't want to miss while creating an instance. By default, port 22 is open for SSH access, but I will also open port 80 for HTTP and port 443 for HTTPS so that my website (which I will set up in a follow-up article) can be viewed. You can also open any custom port according to your needs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608484904127/GLaAMSB1q.gif" alt="Setting up Security Settings" /></p>
<p>In the source field, set the dropdown to <em>Anywhere</em> so that you can access your machine from anywhere in the world. However, if you have a static IP and want only a computer with that IP to SSH into the instance, you can restrict access to that IP address by selecting the <em>Custom</em> option from the dropdown.</p>
<p>After you are done setting up the security group, click on the blue-colored <code>Review and Launch</code> button on the bottom right. We are almost done.</p>
<h3 id="step-7-launching-your-ami">Step 7: Launching your AMI</h3>
<p>Now, you can review all the configurations you have chosen. If everything looks right, click on the <code>Launch</code> button at the bottom-right of the screen. 
After you do that, you will be asked to create a new key pair or select an existing one so that you can access your instance using an SSH client. Keep this file safe; you cannot download it again later.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608489398376/LJRDxwIuM.gif" alt="Creating new key-pair" /></p>
<p>Click on <code>Launch Instance</code> after you are done. If you created a new key pair, the key file will download and the instance will begin launching.</p>
<h3 id="step-8-your-ec2-dashboard">Step 8: Your EC2 Dashboard</h3>
<p>You can now see the launch status on your screen. Click on the <code>View Status</code> button at the bottom of the screen. You will be redirected to the <strong>EC2 Dashboard</strong> from where you can manage your instance and find details about it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1608490190059/xnPVBMjiy.png" alt="EC2 Dashboard" /></p>
<h2 id="wrapping-up">Wrapping up</h2>
<p>Well, you have successfully launched your first EC2 instance. Congratulations. Yay! 
You are free to experiment with the EC2 dashboard. Explore different tabs and Google your way around. Remember to stop (not terminate) your instance when you are not using it, to save credits.</p>
<p>I will soon publish a follow-up article explaining how to connect to your instance and serve multiple web applications using Nginx. Stay tuned.</p>
<h1 id="thanks">Thanks</h1>
<p>Hope you found this article informative and intriguing. If it helped you, do give it a like 🧡 and share it with people who might find it useful. Also check out my  <a target="_blank" href="https://www.instagram.com/encodable/">Instagram</a> and <a target="_blank" href="https://www.facebook.com/enc0dable">Facebook</a> pages for more content.
<a href="https://www.buymeacoffee.com/yasharyan" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" /></a></p>
]]></content:encoded></item><item><title><![CDATA[Get global variables in React JS]]></title><description><![CDATA[Let's suppose you are making an app that makes a lot of requests to the backend at http://localhost:3000. Now you move to production build, and the URL is changed to https://yasharyan.com after hosting it on whatever hosting service you are using. Im...]]></description><link>https://blogs.yasharyan.dev/get-global-variables-in-react-js</link><guid isPermaLink="true">https://blogs.yasharyan.dev/get-global-variables-in-react-js</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[React]]></category><category><![CDATA[Tutorial]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Mon, 07 Dec 2020 17:22:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1607361871274/bxhdj9LFS.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1607361823535/-67IB0Rib.jpeg" alt="React Logo" />
<br />
Let's suppose you are making an app that makes a lot of requests to the backend at <code>http://localhost:3000</code>. Now you move to a production build, and the URL changes to <code>https://yasharyan.com</code> after hosting it on whatever hosting service you are using. Imagine the pain of going to every line where you used the localhost URL and changing it to the new URL. Wouldn't it be nice if you had coded smart and used a global variable, so you could just type this: <br />
<code>${URL}/server-time-up/UTC/</code> <br /><br />
There are many ways to create global variables in React that can be accessed in every component of your web app. I am going to tell you about the most common and easiest ones. <br /><br /></p>
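<p>The idea, in its smallest form, looks like this. A minimal sketch of deriving one base URL from the environment so that moving to production means flipping a single switch (the production hostname here is just a placeholder):</p>

```javascript
// A minimal sketch: one base-URL constant reused everywhere, so switching
// from development to production means changing a single place rather than
// editing every request. (The production hostname is just a placeholder.)
const URL = process.env.NODE_ENV === 'production'
  ? 'https://yasharyan.com'
  : 'http://localhost:3000';

const endpoint = `${URL}/server-time-up/UTC/`;
console.log(endpoint);
```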
<h1 id="using-contextapi">Using ContextAPI</h1>
<p>Context provides a way to pass data through the component tree without having to pass props down manually at every level. The best part is that no external dependencies need to be installed; Context is bundled with React. The approach is simple: say you have many components that want to use a common variable. Rather than passing it down as a prop from the parent component at every level, it can simply be read from the Context.</p>
<h3 id="how-to-do-it">How to do it?</h3>
<p>For easily readable code, it's better to create a <code>context</code> directory in the <code>src</code> folder of your react app. Then create a <code>CONTEXT_NAMEContext.js</code> file in that folder.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1607302710198/4zSY3AnYo.png" alt="Sample directory structure for context" />
There is no restriction on how many contexts you can have in a project. In fact, you can have a dedicated context for each functionality your app wants to use Context for. The code for creating a Context looks like this:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> React, { createContext, useState } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> SampleContext = createContext()
<span class="hljs-keyword">const</span> SampleContextProvider = <span class="hljs-function">(<span class="hljs-params">props</span>) =&gt;</span> {
    const [variableOne, setVariableOne] = useState('somethingRandom')
    const Url = "http://localhost:3000"
    return (
         &lt;SampleContext.Provider 
            value={{
                variableOne,
                Url
             }}&gt;
               {props.children}
         &lt;/SampleContext.Provider&gt;
    )
}
export default SampleContextProvider
</code></pre>
<p>Notice that all the variables (and even functions) that need to be made global are passed down as <code>values</code> in the return statement. Now that the Context has been exported, it's time to import it into the components. First, go to your App.js file and wrap all the components that need access to the context. All their child components will automatically inherit it. </p>
<pre><code class="lang-js">   <span class="hljs-keyword">import</span> React, { Fragment } <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>
   <span class="hljs-keyword">import</span> Component_One <span class="hljs-keyword">from</span> <span class="hljs-string">'./Component_One'</span>
   <span class="hljs-keyword">import</span> Component_Two <span class="hljs-keyword">from</span> <span class="hljs-string">'./Component_Two'</span>
   <span class="hljs-keyword">import</span> Component_Three <span class="hljs-keyword">from</span> <span class="hljs-string">'./Component_Three'</span>
   <span class="hljs-keyword">import</span> SampleContextProvider <span class="hljs-keyword">from</span> <span class="hljs-string">'../contexts/SampleContext'</span>
   <span class="hljs-keyword">const</span> mainComponent = <span class="hljs-function">() =&gt;</span> {
      <span class="hljs-keyword">return</span> (
        <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">Fragment</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>This is a sample component<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
            <span class="hljs-tag">&lt;<span class="hljs-name">SampleContextProvider</span>&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">Component_One</span> /&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">Component_Two</span> /&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">Component_Three</span> /&gt;</span>
            <span class="hljs-tag">&lt;/<span class="hljs-name">SampleContextProvider</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">Fragment</span>&gt;</span></span>
      )
   }
</code></pre>
<p>Notice how all imported components were wrapped with <code>&lt;SampleContextProvider&gt;</code>? All these components now have access to all the values in the context. To access (consume) them, you'll have to do the following:</p>
<pre><code class="lang-js">import React, { Fragment, useState, useEffect, useContext } from 'react'
import { SampleContext } from '../contexts/SampleContext'
import axios from 'axios'
const Component_One = () =&gt; {
    const { variableOne, Url } = useContext(SampleContext)
    const [getServerTimeUp, setServerTimeUp] = useState()
    // Fetch once on mount instead of on every render
    useEffect(() =&gt; {
        axios.get(`${Url}/server-time-up/UTC/`)
            .then(res =&gt; setServerTimeUp(res.data.time))
    }, [Url])
    <span class="hljs-keyword">return</span> (
        <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">Fragment</span>&gt;</span>
             <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>This is the value of variableOne: {variableOne}<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
             <span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>{getServerTimeUp}<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">Fragment</span>&gt;</span></span>
    )
}
</code></pre>
<p>This way, you can globally set and get variables in whatever component you need.
<br /><br /></p>
<h1 id="using-env-file">Using .env file</h1>
<p>If you have used NodeJS, you've probably used or heard of <code>.env</code> files. Let's get that feature on your React app. </p>
<h3 id="case-1-using-create-react-app">Case 1: Using create-react-app</h3>
<p>If you are using the <code>create-react-app</code> to quickly set up your React app, your work to add a <code>.env</code> file is already half done. </p>
<ul>
<li>Step 1: Create a .env file in the root of your React app
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1607342832236/vyNHGI07U.png" alt="Env file in React" /></li>
<li>Step 2: Start adding your variables to the <code>.env</code> file. Remember that each variable name must start with <code>REACT_APP_</code>; otherwise, it will not be picked up. Also note that you need to restart the development server after changing this file.<pre><code class="lang-env">REACT_APP_DATABASE=redis
REACT_APP_FIRST_RELEASE=02Nov2019
REACT_APP_LAST_UPDATE=07Dec2020
</code></pre>
</li>
</ul>
<ul>
<li>Step 3: Import them into your component using <code>process.env.REACT_APP_</code>.<pre><code class="lang-js">render() {
<span class="hljs-keyword">return</span> (
  <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
       <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>
          We are using {process.env.REACT_APP_DATABASE}
       <span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
);
}
</code></pre>
</li>
</ul>
<h3 id="case-2-not-using-create-react-app">Case 2: Not using create-react-app</h3>
<p>If you prefer having more control over your project and writing your webpack configuration yourself, you'll need a few more steps to set up <code>.env</code> support.</p>
<ul>
<li>Step 1: Install the dotenv package in your project using <code>npm install dotenv</code> or <code>yarn add dotenv</code></li>
<li>Step 2: Load it in your index.js file if you need the variables in all components, or in a particular component if you want to scope it there: 
<code>require('dotenv').config()</code></li>
<li>Step 3: Now, you can follow the same process as in Case 1 to get your environment variables set up. You <strong>do not</strong> need to start every variable with <code>REACT_APP_</code> if you are not using <code>create-react-app</code>.</li>
</ul>
<h1 id="exporting-manually-from-a-js-file">Exporting manually from a .js file</h1>
<p>It is perhaps the simplest method there is to have global variables.</p>
<ul>
<li>Step 1: Go to your <code>src</code> folder and create a new folder called <code>constants</code> or whatever you want to name it. 
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1607360283430/-DfyvJJfg.png" alt="exporting variables manually" /></li>
<li>Step 2: Create multiple variables in a new file in the above folder, like <code>global.js</code>, and then export them so that they can be imported into other components.</li>
</ul>
<pre><code class="lang-js"><span class="hljs-keyword">const</span> Url = <span class="hljs-string">'http://localhost:5000'</span>
<span class="hljs-keyword">const</span> themeDefault = <span class="hljs-string">'dark'</span>
<span class="hljs-keyword">const</span> namesOfModes = [<span class="hljs-string">'dark'</span>, <span class="hljs-string">'moonlight'</span>, <span class="hljs-string">'eclipse'</span>, <span class="hljs-string">'light'</span>]

<span class="hljs-keyword">export</span> { Url, themeDefault, namesOfModes }
</code></pre>
<ul>
<li>Step 3: Now it is time to import these constants into our components</li>
</ul>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> React <span class="hljs-keyword">from</span> <span class="hljs-string">'react'</span>
<span class="hljs-keyword">import</span> { Url, themeDefault, namesOfModes } <span class="hljs-keyword">from</span> <span class="hljs-string">'../constants/global'</span>
<span class="hljs-keyword">const</span> Component_Three = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">return</span> (
     <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
       <span class="hljs-tag">&lt;<span class="hljs-name">h1</span>&gt;</span>Current Theme: {themeDefault}<span class="hljs-tag">&lt;/<span class="hljs-name">h1</span>&gt;</span>
       <span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>Homepage: {Url}<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
     <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
  )
}
</code></pre>
<p><br /><br /><br /><br />
<strong>Note:</strong> <em>You might argue that you could use packages like Redux or RecoilJS, but remember that those are state-management tools, and they should not be used just to store global constants.</em>
<br /><br /></p>
<h1 id="thanks">Thanks</h1>
<p>Hope you found this article informative and intriguing. If it helped you, do give it a like 🧡 and share it with people who might find it useful. Also check out my  <a target="_blank" href="https://www.instagram.com/encodable/">Instagram</a> and <a target="_blank" href="https://www.facebook.com/enc0dable">Facebook</a> pages for more content.
<a href="https://www.buymeacoffee.com/yasharyan" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" /></a></p>
]]></content:encoded></item><item><title><![CDATA[Who's responsible for WannaCry Ransomware?]]></title><description><![CDATA[To answer this question, let's go back to May 12th, 2017. Computers of the hospitals across London were stuck on a red screen which said, 

"Oops, your files have been encrypted. Send $300 worth of Bitcoins to this address." 


The dialog box also ha...]]></description><link>https://blogs.yasharyan.dev/whos-responsible-for-wannacry-ransomware</link><guid isPermaLink="true">https://blogs.yasharyan.dev/whos-responsible-for-wannacry-ransomware</guid><category><![CDATA[cybersecurity]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Sun, 29 Nov 2020 16:07:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1606666004649/n-mn2bCHM.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>To answer this question, let's go back to May 12th, 2017. Computers of the hospitals across London were stuck on a red screen which said, </p>
<blockquote>
<p>"Oops, your files have been encrypted. Send $300 worth of Bitcoins to this address." </p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606645271469/ryk3XOc8Y.png" alt="Wana_Decrypt0r_screenshot.png" />
The dialog box also had a timer that indicated when the price would increase, followed by a deadline after which the files could no longer be retrieved. </p>
<h3 id="what-is-ransomware">What is Ransomware?</h3>
<p>The idea behind ransomware is quite simple. If your files are encrypted and you don't have the key to decrypt them, they are no longer readable. Take, for example, the BitLocker feature on Windows: when you encrypt your drive, you set a passphrase for it, and if you forget that passphrase, you no longer have access to your files. The same principle is applied to ransomware, only this time, someone else does it to your files without your permission and demands a ransom in return for the key. </p>
<h3 id="the-wanacrypt">The WanaCrypt</h3>
<p>The ransomware attacking hospitals in London was called WanaCrypt, but people quickly started calling it WannaCry.  </p>
<p>The United Kingdom National Health Service or NHS had to cancel 6912 appointments, with 45 hospitals being affected. The patient registration system was not functional, the inter-department communication was broken, hospitals had to deploy runners to grab reports from various departments manually, and procedures that required high-tech interventions were suspended. There was panic all over, and hospitals began going old-fashioned, using pen and paper. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606665017796/ZFjcTn5Ti.png" alt="Windows logo.png" /></p>
<p>WannaCry was targeting Windows computers, specifically ones connected to the network. You might think that all computers within a hospital would be on a network, but not all of them are connected, partly for this exact reason. Systems like CT scanners were kept isolated from the network. Affected hospitals were relying on these standalone machines to carry out their work. </p>
<h3 id="but-were-only-hospitals-the-target">But were only hospitals the target?</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606665090749/WLEKueX7P.png" alt="Nhs.png" />
Simultaneously, over 100 countries had been affected by this ransomware, but the attack was not focused on healthcare systems. It was only in the United Kingdom that healthcare systems were particularly affected. </p>
<h3 id="lets-time-travel">Let's time travel</h3>
<p>Now, before we proceed, let's travel further back in time. This sub-story might seem completely unrelated, but it is very related.</p>
<p>Someone from within the NSA leaked its <strong>ANT</strong> catalog, short for <strong>Advanced Network Technology</strong> catalog, to a journalist (find the file  <a target="_blank" href="https://www.eff.org/files/2014/01/06/20131230-appelbaum-nsa_ant_catalog.pdf">here</a>). Inside this catalog is a list of hacks, available exploits, and surveillance devices that the NSA can use for any mission. You select an attack that you want to carry out, get the necessary tools issued, and then proceed with the attack. </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606665811971/2X1SntGiY.png" alt="EternalBlue-exploit.png" /></p>
<ul>
<li>Out of all the tools in the catalog, one device was the <em>CottonMouth</em>. It looks like a typical USB plug, completely harmless. But in fact, when connected to the target device, it wirelessly transmits all the data flowing through it. Mouse clicks, keyboard strokes, external webcam data: everything is transmitted to someone as close as the room next door. The NSA created this hardware, and it still isn't available commercially. The catalog even lists its price as $20,000. Can you believe it? How powerful must this device be?</li>
<li>Another device, called <em>JETPLOW</em>, is an implant that can provide backdoor access to Cisco firewalls. </li>
<li>Another interesting tool is called <em>RageMaster</em>, an extension to a VGA port. When connected, it can wirelessly transmit everything the VGA adapter on the target machine sees, essentially cloning your screen. Crazy, isn't it? </li>
</ul>
<p>The devices in this catalog were intended to be used by the <strong>TAO</strong>, short for <strong>Tailored Access Operations</strong>, a unit within the NSA with the primary objective of target reconnaissance. TAO is the NSA's elite hacking group. It has since been renamed <strong>Computer Network Operations</strong>.  </p>
<p>When security firms research hacking campaigns, they usually give hacking units a unique code name. For example, Russian hackers are called <em>Fancy Bear</em>, Iranian hackers are called <em>Charming Kitten</em>, and hackers from the NSA are called <em>The Equation Group</em>. It is believed that whoever is doing work for the Equation Group is working in the TAO.  </p>
<p>Now, the important part of this story occurs in August 2016. A tweet posted by an account by the name of theshadowbrokers included a link to a Pastebin that had the following text, along with some pictures. </p>
<blockquote>
<p>!!! Attention government sponsors of cyber warfare and those who profit from it !!!!
How much you pay for enemies cyber weapons? Not malware you find in networks. Both sides, RAT + LP, full state sponsor tool set? We find cyber weapons made by creators of stuxnet, duqu, flame. Kaspersky calls Equation Group. We follow Equation Group traffic. We find Equation Group source range. We hack Equation Group. We find many many Equation Group cyber weapons. You see pictures. We give you some Equation Group files free, you see. This is good proof no? You enjoy!!! You break many things. You find many intrusions. You write many words. But not all, we are auction the best files.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606662384148/DfmOVi4jS.png" alt="twitter_tweet_by_theshadowbrokerss.png" />
Find the Pastebin archive <a target="_blank" href="https://archive.vn/20160815133924/http://pastebin.com/NDTU5kJQ#selection-605.0-635.589">here</a>.</p>
<p>This was not a joke. A few files had, in fact, been published on GitHub (now removed). People were looking at them, forking them. The uploaded malware was an exploit for Cisco and Fortinet firewalls. It could be used against a fully patched firewall and allowed the attacker to take full control.<br />For the rest of the files, the auction only received $937, which was a big disappointment to the Shadow Brokers. Their second dump was a list of IP addresses that the NSA had infected or was using as proxies to carry out cyber attacks. It was perhaps a way for the Shadow Brokers to show people that they were serious about the files. </p>
<p>Finally, in January 2017, the Shadow Brokers made another post saying goodbye. The post said that they could not accumulate the number of bitcoins they were hoping for, so they would release more tools, for free, for everyone to see. They posted around 60 Windows executables, link libraries, and drivers, claiming they were developed by the TAO to exploit Windows PCs.  </p>
<p>But this wasn't the last we heard from the Shadow Brokers. About three months later, they showed back up in the first week of April, dumping more files for the world to see, along with a message for the President of the United States, saying, </p>
<blockquote>
<p>"Respectfully, what the fuck are you doing? TheShadowBrokers voted for you. TheShadowBrokers supports you. TheShadowBrokers is losing faith in you. Mr. Trump helping theshadowbrokers, helping you. Is appearing you are abandoning “your base”, “the movement”, and the peoples who getting you elected.'  </p>
</blockquote>
<p>This dump contained EternalBlue and EternalRomance. What's unique about EternalBlue is that it can remotely access Windows PCs running SMB, installed by default on all Windows machines before Windows 8. But here's the interesting thing: just a month before the Shadow Brokers published EternalBlue, Microsoft had patched it. Rumor had it that the NSA had given Microsoft a very quiet heads-up about the vulnerability, telling them that it might be in an upcoming dump. </p>
<h3 id="back-to-2017">Back to 2017</h3>
<p>When a cyber-attack of this scale breaks into the world, it attracts many security researchers, anti-virus companies, and threat-detection systems. Everyone is in a race to be the first to find a fix to the problem. You have to understand that when a new threat opens up to the world, there are no news pages, blog posts, or articles about it. It's a strange time when no one knows what's happening. Everyone is talking, tweeting, and sharing screenshots, but there is no clear sense of what is actually going on. Every researcher is trying to get samples of the malware to find a cure, be it huge companies or independent researchers.</p>
<p>Among others was a French security researcher named Matt Suiche, who was working on finding a fix for this ransomware. One thing to note is that malware like this is pre-compiled, meaning that if you look at the program itself, it's gibberish. It's machine code, not readable by humans. Security researchers use reverse-engineering tools like Ghidra or Binary Ninja to convert it to assembly language. That is readable, but very elementary; there are no if-else statements in assembly, and making sense of such a low-level language requires a lot of skill.</p>
<p>While reverse-engineering the malware, Matt noticed something interesting. The malware was using, wait for it, EternalBlue, released just a month before by the Shadow Brokers, to gain access to PCs. This ransomware was self-propagating: once it infects a computer, it tries to infect every other computer on the network. </p>
<p>At the same time, another security researcher named Marcus Hutchins was looking at the malware and saw something very unusual for ransomware. He found that, upon infecting a computer, the malware tries to reach a specific 40-character-long URL. WannaCry would check if that URL existed, and if it did, it would stop running immediately. It wouldn't propagate, it wouldn't encrypt; it would just halt. Whoever created the ransomware wanted a functional stop button. Marcus checked whether the URL was registered, and to his surprise, it wasn't. Strange: the creator had forgotten to register the domain. He quickly bought that domain name and single-handedly ended the WannaCry panic. No more new computers were getting infected.  </p>
<p>A few days after that, a new variant of WannaCry appeared. Matt immediately started working on it. He figured that to make the ransomware functional again, the creator would just have to change the killswitch URL, and he was correct. He registered the domain, and as a result, not many machines were infected. A few days later, another variant was released, and this time, Check Point Software Technologies registered the domain quickly, so not many systems were infected by this version either. And then a fourth variant showed up, and this time, it did not have a killswitch. This version had the potential of ripping through millions of computers worldwide, but it never took off. Maybe it wasn't effective, or anti-virus companies had already detected it and put out signatures for it, or people had already updated their PCs.  </p>
<h2 id="who-was-held-responsible">Who was held responsible?</h2>
<p>The US Department of Justice issued a press release holding a North Korean computer programmer, Park Jin Hyok, responsible for the cyberattack on Sony Pictures, the Bangladesh bank heist, and the creation of the WannaCry ransomware. The FBI put him on its list of most-wanted cyber criminals.
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606664285712/LWEx8Jf_p.jpeg" alt="preview.jpg" /></p>
<p>But, wait a minute. Bear with me for a little longer. 
As people investigated further, they found that there were earlier versions of WannaCry that weren't effective because they were not using EternalBlue; it hadn't been released yet. But on May 9th, 2017, a company called RiskSense published a proof of concept using EternalBlue as an exploit, including source code, and explained how to use it. Three days later, a new version of WannaCry with EternalBlue was released, and it used the same code from the blog post.  </p>
<p>So does this mean that we can point fingers at RiskSense? </p>
<p>Okay, let's compare the facts.<br />North Korea pulled the trigger on WannaCry, but they may not have done it if they hadn't seen the blog post by RiskSense. RiskSense may not have written that blog post if it weren't for the Shadow Brokers dumping those stolen files to the public, which they wouldn't have done if the NSA hadn't developed those exploits to begin with. And EternalBlue would never have existed if Microsoft had caught the bug during development and testing.  </p>
<p>Well, that is a long blame game, and I will leave it to you to decide who to hold responsible.</p>
<h1 id="thanks">Thanks</h1>
<p>Thanks for reading this article. Also check out my  <a target="_blank" href="https://www.instagram.com/encodable/">Instagram</a> and <a target="_blank" href="https://www.facebook.com/enc0dable">Facebook</a> pages for more content.
<a href="https://www.buymeacoffee.com/yasharyan" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" /></a></p>
]]></content:encoded></item><item><title><![CDATA[Insight into Cryptography]]></title><description><![CDATA[In this article,

What is it?
Origin of cryptography
Why do we need it?
Ciphers and Codes
Caesar Shift Cipher
Transposition
Vigenère
Enigma Code
Steganography
Morse Code
Public-Key Ciphers
Why Cryptography?

What is it?
The DuckDuckGo community, in s...]]></description><link>https://blogs.yasharyan.dev/insight-into-cryptography</link><guid isPermaLink="true">https://blogs.yasharyan.dev/insight-into-cryptography</guid><category><![CDATA[Cryptography]]></category><category><![CDATA[cybersecurity]]></category><dc:creator><![CDATA[Yash Aryan]]></dc:creator><pubDate>Wed, 15 Apr 2020 16:43:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1606668291702/LhLEJXE3r.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="in-this-article">In this article,</h3>
<ul>
<li>What is it?</li>
<li>Origin of cryptography</li>
<li>Why do we need it?</li>
<li>Ciphers and Codes</li>
<li>Caesar Shift Cipher</li>
<li>Transposition</li>
<li>Vigenère</li>
<li>Enigma Code</li>
<li>Steganography</li>
<li>Morse Code</li>
<li>Public-Key Ciphers</li>
<li>Why Cryptography?</li>
</ul>
<h2 id="what-is-it">What is it?</h2>
<p>The DuckDuckGo community, in simple words, defines cryptography as 'the art of secret writing.' A more technical definition, as on Wikipedia, is "the practice and study of techniques for secure communication in the presence of third parties called adversaries." The word cryptography derives from the Greek κρυπτός (kryptos), meaning 'hidden', and γράφειν (graphein), meaning 'to write'.</p>
<p>Remember when your teacher made you and your best friend sit on far sides of the classroom because you couldn't keep hush for even a minute? And then you would start passing chits because you couldn't keep your crush on that cute girl or boy in your stomach? You would write them in a manner that curious minds could not figure out what in God's name you wrote. That secret writing would be a code or a cipher if it used a scheme of its own (and not just the 'ugly, unreadable' version of your handwriting). Back then the stakes were not that high (yeah, apart from the embarrassment), but now they are. As the world has shifted to the Internet era, the information of the world has too. It's not just about protecting messages in transit, but also the data stored in the cloud.</p>
<blockquote>
<p>“One must acknowledge, with cryptography, no amount of violence will ever solve a math problem.”<br />
-Jacob Appelbaum (Ex-core member of Tor Project)</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606668552643/LOyMXWzNJ.jpeg" alt="lock and a cloud.jpg" />
Cryptography is about constructing a protocol that helps a sender keep information inaccessible to unwanted eyes. As Wikipedia puts it, it could be termed "the conversion of information from a readable state to apparent nonsense." As described on exploratorium.edu, for a cipher to be useful, certain things should be known at both the sending and the receiving ends:</p>
<ul>
<li>The algorithm or the method used to encipher the original message (a.k.a. plaintext).</li>
<li>The key used with the algorithm to allow plaintext to be enciphered and deciphered.</li>
<li>The period for which the key is valid.</li>
</ul>
<p>Think about this: you and your master-thief friend manage to steal the keys of some wealthy businessman. Your objectives could be visualized as:</p>
<ul>
<li>ALGORITHM: You locating the businessman's home and reaching his front door at the best possible time.</li>
<li>KEY: The key you managed to steal is the cipher key.</li>
<li>TIME: It won't be long before the businessman notices his keys are missing. That window is the period for which your key is valid.</li>
</ul>
<h2 id="origin-of-cryptography">Origin of Cryptography</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606668682723/ABTKtpENc.jpeg" alt="Cryptography origin" />
The English word 'cryptography' was popularized around the 19th century, notably through Edgar Allan Poe's short story The Gold-Bug. The practice itself emerged as people organized into tribes, groups, and kingdoms, bringing ideas of battle, supremacy, and politics. The first known evidence of cryptography can be traced to the Egyptian civilization, about 4,000 years ago: kings communicated using hieroglyphs with the help of scribes, who were the only ones able to read and write those scripts, and it was through them that the kings exchanged messages.</p>
<p>As time passed, similar codes kept appearing, each a little more advanced than its predecessors. Today's codes and ciphers are so hard to crack that it is virtually impossible for a human to break them by brute force; even with the help of massive supercomputers, brute-forcing a well-made cipher is tiresome.</p>
<h2 id="ciphers-and-codes">Ciphers and Codes</h2>
<p>Well, the difference between a <strong>cipher</strong> and a <strong>code</strong> is quite simple: a cipher changes a message on a letter-by-letter basis, while a code operates on whole words or phrases, converting the plaintext (the message) into other words or numbers.</p>
<ul>
<li><p><strong>Code</strong>: A code is a mapping from a meaningful message into something else, usually a group of symbols or characters that make no sense on their own. A code requires a codebook that lists all the mappings; without the codebook, which acts as the key to every symbol in the message, the coded text means nothing.</p>
</li>
<li><p><strong>Cipher</strong>: A cipher (or cypher) is a pair of algorithms that perform the encryption and the reversing decryption. To decrypt a cipher you need both the key and the decrypting algorithm. The key is a secret value or passphrase fed to the algorithm; without the key, the algorithm is useless. It is like trying to open someone's suitcase without knowing the combination: you end up trying every possible combination.</p>
</li>
</ul>
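<p>A minimal sketch of the distinction in Python (the codebook entries and the function name here are invented purely for illustration):</p>

```python
# A toy codebook: whole words map to arbitrary other words (a code).
# A cipher, by contrast, would transform the message letter by letter.
CODEBOOK = {"ATTACK": "EAGLE", "AT": "OVER", "DAWN": "RIVER"}

def encode(message):
    # Replace each word found in the codebook; leave unknown words as-is.
    return ' '.join(CODEBOOK.get(word, word) for word in message.split())

print(encode("ATTACK AT DAWN"))  # EAGLE OVER RIVER
```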
<h2 id="what-are-examples-of-ciphers-and-codes">What are examples of Ciphers and Codes?</h2>
<h3 id="caesar-shift-cipher">Caesar Shift Cipher</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606668891973/_1hRW2Au5.jpeg" alt="Caesar Shift Cipher" />
In 753 BC the world saw the founding of Rome, and with the Romans came a new method: the Caesar shift cipher. In this method, the ciphered message relied on shifting the letters by a number discussed and agreed upon beforehand. The receiver would then shift the letters back by the agreed amount to recover the original message. This was one of the simplest forms of cipher one could find.</p>
<ul>
<li>Let's suppose that Luke Skywalker, along with his rebel alliance, decides to storm the Death Star, and a message has to be conveyed to the rebel base while keeping it from Darth Vader: "ALPHA TEAM AT POSITION."</li>
<li>Let's suppose that the agreed-upon key is 5, so each letter in the message is shifted forward by five positions, wrapping around from Z back to A.</li>
<li>"FQUMF YJFR FY UTXNYNTS" is the ciphertext the base receives. To anyone who intercepts it, it makes no sense; still, this method is ancient and can be cracked easily, even by processors as old as the Intel 4004.</li>
</ul>
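<p>The shift above can be sketched in a few lines of Python. This is a minimal illustration assuming uppercase input, not any standard library's API:</p>

```python
def caesar(text, shift):
    """Shift each uppercase letter forward by `shift` positions, wrapping past Z."""
    out = []
    for ch in text:
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)  # keep spaces and punctuation as-is
    return ''.join(out)

print(caesar("ALPHA TEAM AT POSITION", 5))   # FQUMF YJFR FY UTXNYNTS
print(caesar("FQUMF YJFR FY UTXNYNTS", -5))  # shifting back by 5 decrypts
```

Note that decryption is just encryption with the negated shift, which is why both parties only need to agree on one number.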
<h3 id="transposition">Transposition</h3>
<p>In this type of cipher, the letters are moved around in some regular pattern to make a jumbled sentence that appears meaningless. Let's, for example, take the phrase THE POLICE IS COMING FOR YOU. To convert this into a transposition cipher, we can use what's called a depth-two rail fence: write the letters alternately on two rows, then read the rows off one after the other.</p>
<p>So this phrase now would be written as</p>
<p>T E O I E S O I G O Y U</p>
<p>H P L C I C M N F R O</p>
<p>(or)</p>
<p>TEOIESOIGOYUHPLCICMNFRO</p>
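<p>The depth-two rail fence is just "every other letter" read off in two passes. A quick sketch, assuming uppercase text and ignoring spaces (the function name is my own):</p>

```python
def rail_fence_2(text):
    """Depth-two rail fence: top row is every other letter, then the bottom row."""
    letters = [c for c in text if c.isalpha()]
    top = letters[0::2]     # letters at even positions
    bottom = letters[1::2]  # letters at odd positions
    return ''.join(top + bottom)

print(rail_fence_2("THE POLICE IS COMING FOR YOU"))  # TEOIESOIGOYUHPLCICMNFRO
```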
<h3 id="vigenere">Vigenère</h3>
<p>Vigenère was considered one of the most robust ciphers to break and kept codebreakers on their toes for almost three centuries. Because of this, it earned the title of <strong>le chiffre indéchiffrable</strong>, French for 'the indecipherable cipher'.
This method is similar to the Caesar shift cipher. The only difference is the key, which is not a single number but a series of interwoven Caesar ciphers. It is a form of polyalphabetic substitution.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606668995578/MFjX1AzWT.png" alt="Vigenère cipher" />
Now, let's suppose the message to be sent is 'SENDMORETROOPS' and the cipher key is 'VICTOR.' For encryption, the word VICTOR is written below the plaintext and repeated until it matches the plaintext's length. Each plaintext letter and the key letter beneath it are then mapped onto the grid, where each row starts with a key letter. Thus we get the encrypted message, which in this case is 'NMPWAFMMVKCFKA'.</p>
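<p>In code, the grid lookup reduces to adding each key letter's offset to the corresponding plaintext letter, modulo 26. A minimal sketch, assuming uppercase letters only:</p>

```python
def vigenere(plaintext, key):
    """Encrypt uppercase plaintext by shifting each letter by the matching key letter."""
    out = []
    key_idx = 0
    for ch in plaintext:
        if ch.isalpha():
            shift = ord(key[key_idx % len(key)]) - ord('A')  # key repeats as needed
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            key_idx += 1
        else:
            out.append(ch)
    return ''.join(out)

print(vigenere("SENDMORETROOPS", "VICTOR"))  # NMPWAFMMVKCFKA
```

Decryption works the same way with the key offsets subtracted instead of added.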
<h3 id="enigma-codes">Enigma Codes</h3>
<p>Some people say the Allied powers won a part of the war the day Turing broke the Enigma code. The Germans communicated their messages using a machine called the Enigma machine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606669159193/NKK0_QHTx.jpeg" alt="Enigma" />
The device looked like a typewriter; pressing a letter produced the cipher letter. Several wheels inside the machine were connected through wires to the letters. All Enigma machines were identical, and knowing the initial configuration was essential to decipher a message. The cipher key was not fixed either; it kept changing within a message, because each wheel rotated after a certain number of letters had been typed. Even if the Allies managed to procure a copy of the Enigma machine, they could not decipher any of the messages, because they did not know the initial wheel configuration. The machine Alan Turing built to decrypt the Enigma code is considered an ancestor of modern computers.</p>
<h3 id="steganography">Steganography</h3>
<p>Remember the Dancing Men case from Sherlock Holmes? The messages made no sense and looked as if cavemen had returned to the world. But as Sherlock received more of them, he understood they were not someone fooling around, but coded messages. Think about it: when you see a message that is obviously coded or ciphered, a curious mind will start decrypting it with all its might, and may well manage to recover the real message. That is not what the senders would have wanted, is it? To prevent this, the sender hides the message in a form invisible to third-party eyes, such as inside a picture or a piece of woodwork.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606669217322/I-SxRaAhr.jpeg" alt="Steganography.jpg" /></p>
<p>Such hidden messages can be traced back as far as ancient Greece. The actual message is concealed in some form so that it raises no suspicion in third parties. Consider the QR code: if the software to decode it were not public, it might well have been an effective way to pass secret messages, since no one can tell a picture holds a message just by looking at it. A QR code can have at most 177 rows and 177 columns, for a maximum of 31,329 modules, and the arrangement of these modules stores the data. The maximum amount of data a QR code can hold is about 3 KB.</p>
<h3 id="morse-code">Morse Code</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606669250548/k-Tz-koMX.jpeg" alt="Morse Code" />
Morse code is a way of transmitting messages in the form of signals. Text is encoded as short and long signals, called dots and dashes, or equivalently as on and off. The dot duration is the basic unit of time measurement in Morse transmission, and a dash usually lasts three times as long as a dot. Nowadays, Morse code serves more as an emergency fallback than as an everyday encoding method.</p>
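<p>A tiny encoder shows the idea. The table below is deliberately partial, just enough for the example; a full table covers all 26 letters plus digits and punctuation:</p>

```python
# Partial Morse table for illustration only.
MORSE = {'S': '...', 'O': '---', 'E': '.', 'T': '-', 'H': '....'}

def to_morse(text):
    # Letters are separated by spaces; characters not in the table are skipped.
    return ' '.join(MORSE[c] for c in text.upper() if c in MORSE)

print(to_morse("SOS"))  # ... --- ...
```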
<h3 id="public-key-ciphers">Public Key Ciphers</h3>
<p>Think about this. You have created a cipher that no human could ever break; no device can crack it without the key. Cool; you can now exchange messages with complete privacy. But what about the key? You cannot share the key through your cipher, can you? How would the receiver decode that message without already having the key? Sure, you could whisper it in her ear, but don't you know the walls have ears too? What if the receiver is on the other side of the planet? Send it by post? What if the postman, or anyone in the postal network, is a spy? You can trust no one. To overcome this limitation, public-key ciphers were introduced.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606669296359/a-QmMGZ3A.jpeg" alt="public key encryption.jpg" />
This is considered to be the <em>ultimate modern cipher</em>. Every public-key cipher has two keys, one public and one private, both belonging to the person receiving the message. The public key may be given to anybody; the private key is kept secret by its owner. Loosely speaking, the private key plays the role your account password does: something only you hold, while a matching public key can be made available to anyone on the internet. The cool property is that anything encrypted with your public key can be decrypted only with your private key.</p>
<p>Now suppose you are sending secure mail to someone at work. Before sending, the mail service fetches the receiver's public key and encrypts the mail content with it. When the mail arrives, only the receiver's private key can decrypt the content. That is how it works.</p>
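<p>A toy "textbook RSA" run makes the public/private asymmetry concrete. The tiny primes below are a common classroom example; real systems use keys hundreds of digits long plus padding schemes, so treat this purely as intuition, not as a usable implementation:</p>

```python
# Toy "textbook RSA" with tiny primes -- insecure, for intuition only.
p, q = 61, 53
n = p * q                 # 3233: the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)  # only the private key (d, n) decrypts

assert decrypted == message
print(ciphertext, decrypted)
```

The security rests on the fact that recovering d from (e, n) requires factoring n, which is infeasible for the huge n used in practice.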
<p>Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, electrical engineering, communication science, and physics. Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications. Until modern times, cryptography referred almost exclusively to encryption, which is the process of converting ordinary information (called plaintext) into an unintelligible form (called ciphertext).</p>
<h2 id="why-cryptography">Why Cryptography?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1606669374446/0K6a7EOB5.jpeg" alt="key on a chip.jpg" />
If you are still asking that question, let me paint a scenario for you. "You live in a world where cryptography doesn't exist. You are finally buying that one thing you have been waiting to buy for quite some time. You log into an online store and click Buy Now. You are redirected to a page where you enter your card details: you type your card number XXXX XXXX XXXX XXXX and then the Card Verification Value (CVV) XXX. Done; the transaction succeeds, and you get your product shortly. A few days later, you get a notification from your bank about a transaction you did not make. You check with the bank, and they tell you your card has been used. You immediately instruct them to block the card. They do as instructed, but the spent money cannot be recovered."</p>
<h3 id="what-happened">What happened?</h3>
<p>Since the payment gateway didn't use any encryption, a hacker nearby who was snooping data packets managed to capture your packet and read it. He got your card number, CVV, and all the other information you sent to the payment company. This was possible because everything you sent over the internet travelled in plaintext, not encrypted.</p>
<p>Similarly, a lot is at stake whenever encryption is not used. Military secrets, medical information, bank details, passwords on servers, texts, emails, and phone calls are all encrypted for security, and the right to privacy is recognized in most countries of the world. Hence, encryption is one of the primary necessities of the digital age.</p>
<h1 id="thanks">Thanks</h1>
<p>Thanks for reading this article. Also check out my  <a target="_blank" href="https://www.instagram.com/encodable/">Instagram</a> and <a target="_blank" href="https://www.facebook.com/enc0dable">Facebook</a> pages for more content.
<a href="https://www.buymeacoffee.com/yasharyan" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" /></a></p>
]]></content:encoded></item></channel></rss>