<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Tingwai's Tech Blog]]></title><description><![CDATA[A personal blog about DevOps, Programming and Security. Generally, it's all about the IT industry, technical and non-technical alike.]]></description><link>https://twcloud.tech/</link><image><url>https://twcloud.tech/favicon.png</url><title>Tingwai&apos;s Tech Blog</title><link>https://twcloud.tech/</link></image><generator>Ghost 5.82</generator><lastBuildDate>Mon, 20 Apr 2026 00:42:51 GMT</lastBuildDate><atom:link href="https://twcloud.tech/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Importance of Git Branching Strategy in DevOps]]></title><description><![CDATA[Why and how git branching strategies matter in DevOps]]></description><link>https://twcloud.tech/2024/04/21/the-significance-of-git-branching-strategy-in-devops/</link><guid isPermaLink="false">66236f627363020001511205</guid><category><![CDATA[Development]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Sun, 21 Apr 2024 00:00:41 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1556075798-4825dfaaf498?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGdpdHxlbnwwfHx8fDE3MTM1OTgzNDJ8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h3 id="introduction-to-git-branching-strategy">Introduction to Git Branching Strategy</h3><img src="https://images.unsplash.com/photo-1556075798-4825dfaaf498?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGdpdHxlbnwwfHx8fDE3MTM1OTgzNDJ8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="The Importance of Git Branching Strategy in DevOps"><p>In 
the world of software development, understanding Git branching strategy is crucial for effective collaboration and code management. This strategy allows you to work on different aspects of a project without interfering with the main codebase. By creating branches, you can isolate changes, experiment with new features, and fix issues without impacting the master branch.</p><h3 id="why-is-a-branching-strategy-important">Why is a branching strategy important?</h3><p>A branching strategy provides a structured approach to developing code, enabling multiple developers to work on diverse tasks simultaneously. It helps in maintaining a clean codebase, tracking changes efficiently, and facilitating code review and collaboration. By following a branching strategy, you can avoid conflicts, easily revert changes if needed, and ensure a seamless integration process.</p><h3 id="common-branching-models">Common branching models</h3><ol><li>Feature Branching: Ideal for developing new features or functionalities separate from the main codebase.</li><li>Release Branching: Useful for preparing a stable version of the project for release by isolating it from ongoing development.</li><li>Hotfix Branching: Necessary for quickly addressing critical issues in the production environment without disrupting other development work.</li></ol><p>Git branching strategies like <a href="https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow?ref=twcloud.tech" rel="noreferrer">Gitflow</a>, <a href="https://docs.github.com/en/get-started/using-github/github-flow?ref=twcloud.tech" rel="noreferrer">GitHub flow</a>, and <a href="https://about.gitlab.com/topics/version-control/what-is-gitlab-flow/?ref=twcloud.tech">GitLab flow</a> provide guidelines on when and how to create branches, merge code, and handle releases, enabling teams to streamline their development processes and maintain code quality.</p><p>Understanding and implementing an effective branching strategy in Git can 
significantly improve your team&apos;s productivity, code quality, and overall DevOps practices. By mastering the art of branching, you can better manage project timelines, collaborate seamlessly with team members, and ensure a robust and stable codebase.</p><h2 id="benefits-of-implementing-a-solid-git-branching-strategy">Benefits of Implementing a Solid Git Branching Strategy</h2><ul><li>Organized Development: With a clear branching strategy in Git, you can easily compartmentalize different features, bug fixes, and experiments without affecting the main codebase.</li><li>Improved Collaboration: By using branches effectively, team members can work on separate tasks concurrently, leading to enhanced collaboration and faster development cycles.</li><li>Code Stability: A well-defined branching strategy ensures that the main branch (often &apos;master&apos; or &apos;main&apos;) always contains stable, production-ready code, reducing the risk of introducing bugs to the live environment.</li><li>Flexibility in Workflow: Each branch can serve a distinct purpose &#x2013; feature development, bug fixing, testing, etc. This allows for flexible workflow management and easier tracking of changes.</li><li>Risk Mitigation: By segregating work into branches, you minimize the risk of breaking the main codebase. 
If an issue arises in a feature branch, it can be addressed without affecting the rest of the codebase.</li><li>Version Control: Git branching strategy provides an efficient way to manage different versions of your software, enabling you to maintain multiple release versions and hotfixes simultaneously.</li><li>Continuous Integration and Deployment (CI/CD): A solid branching strategy aligns well with CI/CD practices, enabling automated testing, integration, and deployment processes to run smoothly.</li><li>Easier Rollbacks: In case a new feature causes unforeseen issues in production, having a clear branching strategy makes it easier to identify the changes and roll back to a previous stable state swiftly.</li></ul><p>By implementing a well-thought-out Git branching strategy, you can streamline your development process, increase team productivity, and ensure code quality and stability.</p><h2 id="types-of-git-branching-strategies">Types of Git Branching Strategies</h2><ul><li>Feature Branching: Create a new branch for each new feature or task. Isolate changes related to a specific feature. Allows for parallel development without affecting the main codebase.</li><li>Release Branching: Create a branch specifically for preparing a release. Allows for last-minute changes or fixes without disrupting ongoing development. Ensures that the code in the release branch is stable and ready for deployment.</li><li>Hotfix Branching: Create a branch to address critical issues in production. Prioritize fixing bugs or issues without interfering with regular development work. Allows for quick deployment of urgent fixes to production.</li><li>Mainline (Trunk-Based Development): Keep the main branch as the primary development branch. Encourages small, frequent commits directly to the main branch. Facilitates continuous integration and quick feedback loops.</li><li>GitFlow Branching Model: Defines specific branches for different stages of development (e.g., feature, develop, release,
hotfix). Enforces a strict branching and merging workflow. Provides a structured approach to managing feature releases.</li></ul><p>When choosing a branching strategy, consider factors such as team size, project complexity, release frequency, and deployment requirements. Experiment with different strategies to find the one that best fits your team&apos;s workflow and project needs.</p><h2 id="best-practices-for-git-branching-strategy-in-devops">Best Practices for Git Branching Strategy in DevOps</h2><p>When it comes to Git branching in DevOps, following best practices can help streamline your development workflow and ensure smooth collaboration within your team. Here are some key tips for an effective branching strategy:</p><ul><li>Use Branches for Isolating Features: Create separate branches for each new feature or bug fix to keep changes isolated until they are ready to be merged into the main branch.</li><li>Keep Branches Small and Focused: Avoid having large, complex branches that make it difficult to review and merge code.
Keep branches small and focused on specific tasks.</li><li>Regularly Merge Changes to Main: To prevent conflicts and integration issues, regularly merge changes from the main branch into your feature branches and vice versa.</li><li>Utilize Feature Flags: Implement feature flags to selectively enable or disable new features, allowing for continuous deployment while keeping unfinished features hidden from users.</li><li>Automate Testing: Set up automated testing pipelines to run tests on each branch, ensuring that code changes do not introduce bugs or regressions.</li><li>Code Reviews: Encourage peer code reviews to maintain code quality, identify potential issues early, and share knowledge across team members.</li><li>Versioning: Use semantic versioning to clearly communicate the impact of changes and ensure compatibility between different versions of your software.</li><li>Clear Naming Conventions: Establish clear naming conventions for branches to indicate their purpose, such as <code>feature/</code>, <code>bugfix/</code>, or <code>hotfix/</code>.</li></ul><p>By following these best practices, you can enhance the efficiency of your Git branching strategy in DevOps, promote collaboration, and deliver high-quality software more consistently.</p><h2 id="impact-of-git-branching-strategy-on-team-collaboration">Impact of Git Branching Strategy on Team Collaboration</h2><p>When it comes to team collaboration, the Git branching strategy plays a crucial role in ensuring smooth and efficient workflows. By understanding the impact of Git branching strategies on your team, you can enhance collaboration and productivity significantly.</p><ul><li>Isolation of Work: Branching allows team members to work on features or fixes independently without interfering with each other&apos;s code. 
This isolation reduces conflicts and enables seamless parallel development.</li><li>Improved Code Quality: With branching strategies like feature branches or pull request workflows, team members can review each other&apos;s code changes before merging them into the main branch. This process enhances code quality and reduces the chances of introducing bugs.</li><li>Flexibility in Development: Git branching strategies provide flexibility in managing different aspects of the development process. For instance, using release branches allows teams to prepare and stabilize code for deployment without disrupting ongoing development tasks.</li><li>Enhanced Communication: By adopting branching strategies that encourage regular communication, such as utilizing branch naming conventions or tagging for releases, teams can improve collaboration and keep everyone informed about the project&apos;s progress.</li><li>Conflict Resolution: Branching strategies help in resolving conflicts that may arise when multiple team members are working on the same codebase. By following branching best practices, conflicts can be minimized, and resolution becomes more systematic.</li><li>Encouraging Experimentation: Branching strategies like experimental or feature toggles enable teams to experiment with new ideas or functionalities without impacting the main codebase. This promotes innovation and creativity within the team.</li></ul><p>Git branching strategy has a profound impact on team collaboration by fostering isolation, improving code quality, providing flexibility, enhancing communication, aiding conflict resolution, and encouraging experimentation. 
By carefully choosing and implementing the right branching strategy, teams can streamline their workflows and achieve greater success in their projects.</p><h2 id="automating-workflows-with-git-branching-strategy">Automating Workflows with Git Branching Strategy</h2><ul><li>Git branching strategy helps you automate workflows by providing a structured approach to managing code changes.</li><li>By using branches, you can work on different features independently without affecting the main codebase.</li><li>CI/CD tools like <a href="https://www.jenkins.io/?ref=twcloud.tech" rel="noreferrer">Jenkins</a> can be integrated with Git branches to trigger automated builds and tests whenever changes are pushed to specific branches.</li><li>With Git branching strategy, you can automate the deployment process by defining automated pipelines that push changes to different environments based on the branch being merged.</li><li>By automating workflows with Git branching strategy, you can streamline the development process, reduce human errors, and ensure a more efficient and reliable delivery of software.</li><li>Automating workflows through Git branching strategy also enables teams to have better collaboration and visibility into the progress of different features being developed simultaneously.</li><li>Continuous integration and continuous deployment practices can be easily implemented with a well-defined Git branching strategy, ensuring that code changes are tested and deployed rapidly and reliably.</li><li>Automating workflows with Git branching strategy ultimately leads to faster delivery of features, increased team productivity, and higher quality software releases.</li></ul><h2 id="using-git-branching-strategy-to-enhance-continuous-integrationcontinuous-deployment-cicd-pipelines">Using Git Branching Strategy to Enhance Continuous Integration/Continuous Deployment (CI/CD) Pipelines</h2><p>Git branching strategy plays a crucial role in improving your CI/CD pipelines. 
By utilizing branching effectively, you can enhance collaboration and ensure a smooth workflow within your DevOps environment. Here&apos;s how you can leverage Git branching to optimize your CI/CD pipelines:</p><ul><li>Feature Branches: Create separate branches for each new feature or bug fix. This allows developers to work independently on different tasks without interfering with each other&apos;s code. Once the feature is completed, merge it back into the main branch for integration.</li><li>Release Branches: Before deploying any major changes to production, create a release branch. This branch allows you to test and fine-tune the upcoming release without disrupting the main branch. Once the release is stable, merge it back into the main branch and deploy it.</li><li>Hotfix Branches: In case of urgent bug fixes in the production environment, create a hotfix branch from the main branch. This branch should be used exclusively for fixing critical issues. After the hotfix is tested and verified, merge it back into both the main branch and the release branch.</li><li>Pull Requests: Encourage code reviews and collaboration by using pull requests. When a developer completes a task on a branch, they can create a pull request to merge their changes into the main branch. This promotes quality control and ensures that only approved code enters the main codebase.</li></ul><p>By incorporating these Git branching strategies into your CI/CD pipelines, you can streamline your development process, reduce errors, and facilitate a more efficient workflow for your DevOps team.</p><h2 id="dealing-with-challenges-and-pitfalls-of-git-branching-strategy-in-devops">Dealing with Challenges and Pitfalls of Git Branching Strategy in DevOps</h2><p>When facing challenges and pitfalls with your Git branching strategy in DevOps, it&apos;s crucial to address them promptly to maintain a smooth workflow. 
Here are some ways to tackle common issues:</p><ul><li>Merge conflicts: These can arise when multiple developers are working on the same file simultaneously. To handle this, communicate effectively with your team to avoid overlapping changes. If conflicts occur, use tools like <code>git diff</code> to identify and resolve them efficiently.</li><li>Branch clutter: As your project progresses, you may end up with numerous branches, making it challenging to track changes. Regularly clean up obsolete branches to declutter your repository and enhance visibility. You can use commands like <code>git branch --merged</code> to identify merged branches for deletion.</li><li>Inconsistent naming conventions: When team members use different naming conventions for branches, it can lead to confusion and errors. Establish clear guidelines for branch naming to maintain consistency across the team. For instance, consider using prefixes like <code>feature/</code> or <code>bugfix/</code> to categorize different types of branches.</li><li>Lack of code reviews: Skipping code reviews in the branching process can result in poor code quality and integration issues down the line. Encourage peer reviews before merging branches to ensure code reliability and alignment with project standards.</li><li>Over-reliance on long-running branches: Long-lived branches can complicate the merging process and delay feedback integration. Aim to keep branch lifetimes short by breaking down features into smaller tasks and merging them to the main branch frequently.</li></ul><p>By being proactive in addressing these challenges and pitfalls, you can streamline your Git branching strategy in DevOps and enhance collaboration within your team. 
Stay vigilant, communicate effectively, and adapt your approach as needed to overcome obstacles and optimize your workflow effectively.</p><h2 id="future-trends-in-git-and-devops">Future Trends in Git and DevOps</h2><ul><li>Increased automation: With the advancement of technology, there will be a greater emphasis on automating various aspects of Git and DevOps processes. This will streamline workflows and reduce errors.</li><li>Integration with CI/CD pipelines: Git will continue to integrate seamlessly with Continuous Integration/Continuous Deployment pipelines, allowing for faster and more reliable software delivery.</li><li>Adoption of GitOps: GitOps, which involves using Git as a single source of truth for infrastructure automation, will become more popular. This approach aligns well with Git branching strategies and enhances collaboration between development and operations teams.</li><li>Enhanced security features: Future Git branching strategies will likely include more robust security features to protect code repositories and sensitive information, ensuring compliance with data protection regulations.</li><li>Focus on scalability: As organizations grow and projects become more complex, Git branching strategies will need to scale accordingly. Future trends may include strategies for managing branching in large, enterprise-level projects.</li><li>Evolution of branching models: New branching models may emerge in response to changing development practices and project requirements. 
Teams may experiment with different strategies to find the most efficient way to work with Git branches.</li></ul><p>By staying up to date on these future trends in Git and DevOps, you can ensure that your workflows are optimized for efficiency, collaboration, and security in an ever-evolving technological landscape.</p>]]></content:encoded></item><item><title><![CDATA[Patching with Git]]></title><description><![CDATA[Creating and applying patches with Git, a simple yet powerful tool for development.]]></description><link>https://twcloud.tech/2017/03/22/patching-with-git/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f42d</guid><category><![CDATA[Development]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Wed, 22 Mar 2017 04:11:42 GMT</pubDate><media:content url="https://twcloud.tech/content/images/2017/04/github.png" medium="image"/><content:encoded><![CDATA[<img src="https://twcloud.tech/content/images/2017/04/github.png" alt="Patching with Git"><p>There are times when we want to update our web application on the server side, but our shared hosting provider blocks outgoing SSH ports, preventing us from using <code>git</code> to pull updates on the host directly. So instead of using <code>git</code> with a deploy key on the server, we go for a server-side update: <code>git diff</code> to create a patch and <code>git apply</code> to apply it. In order for this to work, <strong>SSH access to the server</strong> is required.</p>
<h2 id="creating-a-patch-file">Creating a Patch File</h2>
<p><strong>Option 1:</strong> Creating patch from working copy (not committed to Git yet): <code>git diff &gt; patch.diff</code></p>
<p><strong>Option 2:</strong> Creating patch starting from one revision until another newer revision, remember to replace <code>&lt;from-commit-hash&gt;</code> and <code>&lt;to-commit-hash&gt;</code> to the revision hash you need. To get a list of revision hashes, use <code>git log</code>: <code>git diff &lt;from-commit-hash&gt; &lt;to-commit-hash&gt; &gt; patch.diff</code></p>
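<p>As a concrete sketch of both options in a throwaway repository (the path <code>/tmp/patch-src</code> and the file names below are illustrative, not from the original post):</p>

```shell
set -e
# Throwaway repository for demonstration; path and names are illustrative.
rm -rf /tmp/patch-src
git init -q /tmp/patch-src
cd /tmp/patch-src
git config user.email demo@example.com
git config user.name Demo

echo "v1" > app.txt
git add app.txt && git commit -qm "first"
echo "v2" > app.txt
git commit -qam "second"

# Option 2: patch covering the changes between two revisions
git diff HEAD~1 HEAD > revisions.diff

# Option 1: patch of uncommitted working-copy changes
echo "v3" > app.txt
git diff > working.diff
```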
<h2 id="applying-a-patch">Applying a Patch</h2>
<ol>
<li>Copy the patch file over to the server either through <code>SCP</code> or <code>FTP</code></li>
<li><code>cd</code> over to the root directory of application where you are applying the patch</li>
</ol>
<p>Once you are logged into the remote server, you can apply the patch to the original file specified in the <code>git diff</code> command, or apply it to a different file instead.</p>
<h3 id="patching-the-original-file">Patching the original file</h3>
<p><code>git apply --ignore-space-change --ignore-whitespace patch.diff</code>, where <code>patch.diff</code> is the patch file</p>
<h3 id="patching-a-different-file">Patching a different file</h3>
<p><code>patch -p1 &lt;file-to-patch&gt; patch.diff</code>, where:</p>
<ul>
<li><code>patch.diff</code> is the patch file</li>
<li><code>&lt;file-to-patch&gt;</code> is the file to apply the patch</li>
</ul>
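<p>Putting the two halves together, here is a minimal end-to-end sketch: one copy of a repository plays the role of the local checkout where the patch is created, and a second copy stands in for the server. All paths and file names are illustrative:</p>

```shell
set -e
# Two copies of a repository: "local" (patch source) and "server" (patch target).
rm -rf /tmp/patch-demo && mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
git init -q local
cd local
git config user.email demo@example.com
git config user.name Demo
echo "version 1" > app.txt
git add app.txt && git commit -qm "initial"
cd ..
cp -r local server        # the server copy starts at the same revision

# Make an uncommitted change locally and capture it as a patch
cd local
echo "version 2" > app.txt
git diff > ../patch.diff

# "On the server": apply the patch as shown above
cd ../server
git apply --ignore-space-change --ignore-whitespace ../patch.diff
cat app.txt               # now contains "version 2"
```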
]]></content:encoded></item><item><title><![CDATA[MeteorJS Deployment the Proper Way]]></title><description><![CDATA[<p>Meteor JS is undoubtedly a great framework that makes it very fast to learn and prototype. However, the downfall of the framework is when we try to move it to production, the documentation is scarce. This is also a problem we faced when we try to migrate our code for</p>]]></description><link>https://twcloud.tech/2016/06/22/meteorjs-deployment-the-proper-way/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f42c</guid><category><![CDATA[Development]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Wed, 22 Jun 2016 07:27:00 GMT</pubDate><media:content url="https://twcloud.tech/content/images/2017/04/meteorjs.png" medium="image"/><content:encoded><![CDATA[<img src="https://twcloud.tech/content/images/2017/04/meteorjs.png" alt="MeteorJS Deployment the Proper Way"><p>Meteor JS is undoubtedly a great framework that makes it very fast to learn and prototype. However, its downfall comes when we try to move it to production: the documentation is scarce. This was also a problem we faced when we tried to migrate our code to production. There are a few things to note before releasing a Meteor app to production:</p>
<ol>
<li>Remove Insecure</li>
<li>Remove Autopublish</li>
<li>Make sure that Meteor JS environment variables are set correctly</li>
</ol>
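<p>The first two items are one command each, run from the root of your Meteor project directory (this assumes the standard <code>meteor</code> CLI is on your PATH):</p>

```shell
# Run inside the Meteor project directory.
meteor remove insecure
meteor remove autopublish
# Optionally confirm neither package is still listed:
meteor list
```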
<h2 id="insecure-package">Insecure Package</h2>
<p>Meteor comes installed with the <code>insecure</code> package, which makes development quick. It allows the client side to edit the database directly (insert, update and delete only) without writing any code on the server side, but when you move your app to production, it should be removed for security reasons. Once this package is removed, you are no longer able to use the <code>insert()</code>, <code>update()</code>, <code>upsert()</code> and <code>remove()</code> methods from the collections API; instead, you need to move that part of the code into a server-side <code>Meteor.methods()</code> definition and, on the client side, replace it with a <code>Meteor.call()</code>. For more information on how this can be achieved, please refer to <a href="https://www.meteor.com/tutorials/blaze/security-with-methods?ref=twcloud.tech">this Meteor tutorial</a>.</p>
<h2 id="autopublish-package">Autopublish Package</h2>
<p>Like the <code>insecure</code> package, this one is also installed on a fresh Meteor installation. However, it does not involve any database editing features; instead, this package is about the database&apos;s <code>find()</code> method. Once this package is removed, your app&apos;s <code>find()</code> calls on the client no longer return data until you add <code>Meteor.subscribe()</code> on the client side and <code>Meteor.publish()</code> on the server side. What I found confusing about this feature is the naming of the methods: the client uses the same <code>Model.find()</code> method as the server.</p>
<p>The way this works is that when the controller changes, the client requests a subset of the data to be stored on the client side, so when <code>Model.find()</code> is called on the client side, it is actually querying the data stored within the client. If you have conditions that filter the data by field on the client side, make sure those fields are also selected in the server-side <code>publish()</code> method. For more information on how to migrate to the publish/subscribe methods, please refer to <a href="https://www.meteor.com/tutorials/blaze/publish-and-subscribe?ref=twcloud.tech">this tutorial</a>.</p>
<h2 id="meteor-js-environment-variables">Meteor JS Environment Variables</h2>
<p>Other than the <code>insecure</code> and <code>autopublish</code> packages, Meteor also comes with a feature called <em>Hot Code Push</em>. With this feature, when you change your template, Meteor JS pushes the update to all clients. This makes it convenient to update clients, but it makes debugging hard if you don&#x2019;t know the feature is enabled by default. We had this issue when we first deployed the app on DigitalOcean following the deployment guides available on the Internet. The symptoms we encountered were:</p>
<ol>
<li>When the application is first launched (the first launch after a fresh installation), the screen seems to refresh automatically.</li>
<li>Once the screen refreshed, the app was no longer able to contact the server.</li>
</ol>
<h4 id="symptom-1">Symptom #1</h4>
<p>This is due to Meteor Cordova&apos;s <code>Hot Code Push</code> pushing new code to the client; once the client gets the new code, it refreshes the screen. This is supposed to happen only when I rebuild the application, not for every client. To prevent this, the <code>AUTOUPDATE_VERSION</code> environment variable needs to be set; when it changes, an update is sent to the client. For more information about <code>AUTOUPDATE_VERSION</code>, please refer to the <a href="https://github.com/meteor/meteor/blob/bc6bfceacf767ae878a20cf164f26b9cbf96493d/packages/autoupdate/QA.md?ref=twcloud.tech#autoupdate_version">FAQ</a>.</p>
<h4 id="sympton-2">Symptom #2</h4>
<p>The MeteorJS <code>Hot Code Push</code> feature also allows updating the server&apos;s URL, which is handy when migrating servers. However, it requires the <code>ROOT_URL</code> environment variable to be set correctly on the server side, and this is exactly what we ran into with our application. After a whole day of debugging trying to figure out what went wrong, we finally discovered that this environment variable was misconfigured. If you are not migrating to a new server, make sure that <code>ROOT_URL</code> is the same as the <code>--server</code> parameter you used to build the app. For more information about this issue, please refer to <a href="https://github.com/meteor/meteor/issues/3698?ref=twcloud.tech">this GitHub issue</a>.</p>
<h2 id="my-final-upstart-configuration">My Final Upstart Configuration</h2>
<pre><code>#upstart service file at /etc/init/meteor-service.conf
description &quot;Meteor.js (NodeJS) application for example.com:3000&quot;
author &quot;rohanray@gmail.com&quot;

# When to start the service
start on runlevel [2345]

# When to stop the service
stop on shutdown

# Automatically restart process if crashed
respawn
respawn limit 10 5

# Essentially lets upstart know the process will detach itself to the background
expect fork

# drop root privileges and switch to the application user
setuid &lt;user&gt;
setgid &lt;user&gt;

script
    export PATH=&apos;/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin&apos;
    export NODE_BIN=&apos;/usr/local/bin&apos;
    export PORT=3000
    # this allows Meteor to figure out correct IP address of visitors
    export HTTP_FORWARDED_COUNT=1
    export MONGO_URL=mongodb://&lt;mongo User&gt;:&lt;mongo password&gt;@127.0.0.1:27017/&lt;mongo DB&gt;
    export ROOT_URL=https://&lt;domain&gt;
    # this is for hot code push
    export AUTOUPDATE_VERSION=0.0.1
    exec node ~/bundle/main.js &gt;&gt; ~/meteor.log
end script
</code></pre>
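<p>With the file saved at <code>/etc/init/meteor-service.conf</code>, the service can be managed with the standard Upstart commands (shown as a sketch; note that Upstart has since been replaced by systemd on newer Ubuntu releases):</p>

```shell
sudo start meteor-service     # launch the app
sudo status meteor-service    # check that it is running
sudo restart meteor-service   # pick up a new bundle after redeploying
```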
<h2 id="references">References</h2>
<ol>
<li><a href="https://www.meteor.com/tutorials/blaze/security-with-methods?ref=twcloud.tech">Insecure Meteor Tutorial</a></li>
<li><a href="https://www.meteor.com/tutorials/blaze/publish-and-subscribe?ref=twcloud.tech">Autopublish Meteor Tutorial</a></li>
<li><a href="https://github.com/meteor/meteor/blob/bc6bfceacf767ae878a20cf164f26b9cbf96493d/packages/autoupdate/QA.md?ref=twcloud.tech#autoupdate_version">Meteor Hot Code Push FAQ</a></li>
<li><a href="https://github.com/meteor/meteor/issues/3698?ref=twcloud.tech">Meteor Github issue</a></li>
</ol>
]]></content:encoded></item><item><title><![CDATA[How Your Online Accounts Are Hacked]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h2 id="methodstohackanonlineaccount">Methods to Hack an Online Account</h2>
<p>There are many ways an attacker can gain access to our online accounts; depending on the implementation of an application, some methods work better than others:</p>
<ul>
<li>Session Hijacking
<ul>
<li>Also known as cookie stealing, where an attacker steal your cookie to gain access to your</li></ul></li></ul>]]></description><link>https://twcloud.tech/2016/06/19/how-your-online-accounts-are-hacked/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f424</guid><category><![CDATA[Security]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Sun, 19 Jun 2016 16:28:51 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="methodstohackanonlineaccount">Methods to Hack an Online Account</h2>
<p>There are many ways an attacker can gain access to our online accounts; depending on the implementation of an application, some methods work better than others:</p>
<ul>
<li>Session Hijacking
<ul>
<li>Also known as cookie stealing, where an attacker steals your cookies to gain access to your account.</li>
</ul>
</li>
<li>Phishing
<ul>
<li>This method is sometimes combined with session hijacking to steal your cookies. It can be a forged Facebook site that asks you to log in in order to capture your password.</li>
</ul>
</li>
<li>Password Reset
<ul>
<li>The intention of this feature is to recover a lost password, but some attackers will try to use it to gain access to your account by answering a few questions; that&apos;s why it is important to set up difficult recovery questions and answers.</li>
</ul>
</li>
<li>Keyloggers
<ul>
<li>Unless your computer is rigged or compromised, this method is less common. A keylogger is software or hardware that records all your keystrokes (all the keys you type on the keyboard).</li>
</ul>
</li>
</ul>
<p>For the sake of this article, I will only talk about the most common method (session hijacking) and the mistakes many users make that get their cookies revealed.</p>
<h2 id="sessionhijacking">Session Hijacking</h2>
<h3 id="howsessionsworked">How Sessions Worked</h3>
<p>Once you are logged in to an online service such as Facebook or Gmail, the server will create a unique session ID that identify your account, and sends it back to your browser. Once your browser got it, it will store the session ID in the cookies.</p>
<p>Every time a request is sent, your browser sends the cookies back to the server. So, when someone is able to get your cookies, they can log in to your account without knowing your password.</p>
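<p>The round trip can be sketched in a few lines of Python. This is a toy model for illustration only; the names <code>sessions</code>, <code>login</code> and <code>handle_request</code> are hypothetical, not any real framework&apos;s API:</p>

```python
import secrets

# Toy model of server-side sessions (illustrative, not a real framework).
sessions = {}  # session ID -> the account it identifies

def login(username, password):
    # A real server verifies the password first; on success it mints
    # a random session ID and returns it via a Set-Cookie header.
    session_id = secrets.token_hex(16)
    sessions[session_id] = username
    return "Set-Cookie: sid=" + session_id

def handle_request(cookie_header):
    # Later requests are authenticated by the cookie alone:
    # whoever presents a valid session ID *is* that user.
    sid = cookie_header.split("sid=", 1)[1]
    return sessions.get(sid)

set_cookie = login("alice", "correct-horse-battery-staple")
sid = set_cookie.split("sid=", 1)[1]
# A copied session ID works from any machine -- no password required:
print(handle_request("Cookie: sid=" + sid))  # prints: alice
```

<p>Note that <code>handle_request</code> never sees the password; that is exactly why a stolen cookie is as good as a stolen password for as long as the session lives.</p>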
<h3 id="simulatingsessionhijack">Simulating Session Hijack</h3>
<p>I will be using Chrome as the primary demonstration here, with a logged-in Facebook account. It should be applicable to other browsers too:</p>
<ol>
<li>Once you have logged in to your Facebook account, open Chrome Developer Tools via <code>Menu &gt; More Tools &gt; Developer Tools</code></li>
<li>From the tabs on the top, select <code>Resources</code></li>
<li>On the left bar, click on <code>Cookies</code> dropdown and select <code>www.facebook.com</code></li>
<li>Now, open a new incognito window (<code>Menu &gt; New Incognito Window</code>) and go to <code>facebook.com</code>, it shouldn&apos;t be logged in</li>
<li>On the incognito window, open <code>Developer Tools</code> as in step 1, and select the <code>Console</code> tab at the top</li>
<li>Click on the <code>Console</code> <code>&gt;</code> prompt and enter <code>document.cookie=&quot;c_user=&lt;copy value from your logged in Facebook c_user cookie&gt;&quot;</code></li>
<li>Then, enter <code>document.cookie=&quot;xs=&lt;copy value from your logged in Facebook xs cookie&gt;&quot;</code></li>
<li>Refresh the page and Voila! You are mysteriously logged in to your Facebook account!</li>
</ol>
<p>Here, I used an incognito tab to simulate the attack; you can, however, copy the cookie values of <code>xs</code> and <code>c_user</code> to another computer and continue from step 6.</p>
<h3 id="howyourcookiesarestolen">How Are Your Cookies Stolen?</h3>
<p>TCP/IP wasn&apos;t designed to be secure; nobody predicted that it would grow into what it is today. It was initially designed for communication between a few computers, but the original design was extended over time to include every computer in the world. There are two ways to get your browser cookies:</p>
<ol>
<li>Remotely - where an attacker tries to get your browser to send the cookies to an attacker-controlled website, for example via a forged login page.</li>
<li>Locally - where an attacker redirects all traffic through the attacker&apos;s computer (a.k.a. a Man-in-the-Middle attack), and software parses the traffic to scan for session IDs of popular sites.</li>
</ol>
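<p>As a toy illustration of the local approach: once an attacker can read plaintext HTTP traffic, harvesting cookies is just string parsing. The captured request and the <code>harvest_cookies</code> helper below are made up for illustration; this is not a real sniffer:</p>

```python
from http.cookies import SimpleCookie

# A captured plaintext HTTP request (made-up traffic for illustration).
captured_request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Cookie: c_user=100000123; xs=abc123secret\r\n"
    "\r\n"
)

def harvest_cookies(raw_request):
    # Pull the Cookie header out of a sniffed request and parse it.
    for line in raw_request.split("\r\n"):
        if line.lower().startswith("cookie:"):
            jar = SimpleCookie()
            jar.load(line.split(":", 1)[1].strip())
            return {name: morsel.value for name, morsel in jar.items()}
    return {}

print(harvest_cookies(captured_request))
# prints: {'c_user': '100000123', 'xs': 'abc123secret'}
```

<p>Tools used against public WiFi do essentially this, only against thousands of frames per second instead of one hand-written request.</p>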
<p>Remote attacks for cookie stealing are not very common; if you can forge a website, why not ask the victim for their password instead of stealing their cookies? They are, however, commonly used in spear phishing attacks, where the attacker has a designated victim and wants to simulate a familiar environment without looking suspicious.</p>
<h4 id="publicwifiisevil">Public WiFi is Evil</h4>
<p>The sad truth about WiFi is that it functions very much like a hub. Yes, it&apos;s encrypted, and yes, it&apos;s not easy to get WiFi encryption keys, especially with the WPA encryption method and its military-grade encryption algorithms; breaking military-grade encryption by brute force takes years. But who said anything about breaking keys? Anyone who knows a particular &quot;WiFi password&quot; literally has the key to decrypt any client&apos;s traffic within that network (WEP or WPA Personal only, not WPA Enterprise). In other words, all the attacker needs to do is connect to a particular public WiFi network and start sniffing/capturing all traffic. The AP&apos;s hub-like behavior makes this easy, as the router broadcasts each packet to all clients, and each client checks whether the packet is destined for it and discards it if it&apos;s not.</p>
<p>To reduce suspicion, the attacker can save the traffic to a file first, then harvest all the session IDs from that file. Once they have the session IDs, they gain access to the victim&apos;s account without the victim&apos;s knowledge, provided the victim hasn&apos;t logged out.</p>
<h4 id="sslhttpsisnotfoolproof">SSL (HTTPS) is Not Foolproof</h4>
<p>SSL offers traffic encryption to its users through PKI: traffic encrypted with the private key can only be decrypted with the public key, and vice versa; both keys are required to complete the cycle. Once the browser receives your certificate, which contains all the essential information needed to communicate with the server securely, it checks with the issuer whether the certificate really is what it says it is. If something is fishy about the certificate, the browser warns the user that the certificate is invalid. Most users ignore the warning because they have no idea what it means. That is how they expose their supposedly secure traffic.</p>
<h4 id="howsslfailed">How SSL Fails</h4>
<p>When someone wants to steal traffic from a secured site, all they need is the private key to decrypt the client&apos;s traffic, but it is only available on the server. So how can they get a private key without gaining access to the server?</p>
<p>One way is to create a self-signed certificate: in other words, they make themselves a CA. Since they aren&apos;t a CA recognized by the browser, the browser will still warn the user about an invalid CA, but having created the certificate, the attacker now owns a private key.</p>
<p>The problem now is how to replace the original certificate with this self-signed certificate so the attacker can decrypt the traffic. The answer: by poisoning the ARP caches of all clients so that the router&apos;s entry points to the attacker&apos;s MAC address. Now the attacker impersonates everyone, and all traffic from the router goes through the attacker&apos;s computer.</p>
<p>Once the attacker has the traffic, a tool checks whether it is for a targeted website. If it is, the tool replaces the certificate with the self-signed version and sends it on to the intended client; if it isn&apos;t, the traffic is forwarded unmodified. That is how users&apos; session IDs are exposed when they ignore certificate warnings.</p>
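<p>Why the warning matters can be shown with a toy model, where a pinned fingerprint stands in for the browser&apos;s real chain-of-trust validation (the certificate bytes below are made up):</p>

```python
import hashlib

# In reality the browser validates the issuer's signature against its
# trusted CA store; a pinned fingerprint stands in for that check here.
real_cert = b"CN=www.example.com, issuer=Trusted CA, key=server-pubkey"
forged_cert = b"CN=www.example.com, issuer=Attacker CA, key=attacker-pubkey"

expected_fingerprint = hashlib.sha256(real_cert).hexdigest()

def browser_accepts(cert):
    # False means the browser shows the "invalid certificate" warning.
    return hashlib.sha256(cert).hexdigest() == expected_fingerprint

print(browser_accepts(real_cert))    # prints: True
print(browser_accepts(forged_cert))  # prints: False
```

<p>The forged certificate names the right site but cannot reproduce the trusted issuer, and that mismatch is precisely what the warning is telling the user.</p>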
<h3 id="howtoprotectyourselffromsessionhijackingandcookiestealing">How to Protect Yourself from Session Hijacking and Cookie Stealing</h3>
<ol>
<li>Avoid public WiFi, or at least avoid using public WiFi to log in to your online accounts, such as Facebook</li>
<li>Always log out of your account, especially after using any online service on a public network; your session ID is only invalidated and renewed when you log out and log in again</li>
<li>DO NOT ignore browser warnings</li>
<li>Use a detection tool to check for anything fishy in the network before using it; BlackSheep is a good tool to detect session hijacking</li>
<li>Use a VPN, Tor, or anything else that encrypts your traffic before using any public WiFi</li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Access Router Web Management Interface Remotely Through SSH]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>As an administrator of multiple Linux servers in various locations, I find it<br>
hard to manage port forwarding in routers, of course you can configure a<br>
remote GUI login such as RDP and VNC but all these servers only enable<br>
services they need with SSH as their main method of</p>]]></description><link>https://twcloud.tech/2014/03/19/access-router-web-management-interface-remotely-through-ssh/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f42a</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Wed, 19 Mar 2014 03:48:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>As an administrator of multiple Linux servers in various locations, I find it<br>
hard to manage port forwarding in routers. Of course, you can configure a<br>
remote GUI login such as RDP or VNC, but these servers only enable the<br>
services they need, with SSH as their main method of remote administration.<br>
Then I found out about SSH tunneling, which is a lifesaver.<br>
There are two main methods to tunnel traffic through SSH:</p>
<ol>
<li><a href="http://superuser.com/questions/330131/ssh-tunnel-to-home-network-and-access-router-web-interface?ref=twcloud.tech">Forward a single port through SSH</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Secure_Shell?ref=twcloud.tech#Encrypted_SOCKS_tunnel">Tunneling through SOCKS</a></li>
</ol>
<p>The first method is good for temporary tunneling, and the second method is<br>
good for when you want to encrypt your traffic. I think the first method is<br>
more suitable for my case, but you are free to use the second method as well.</p>
<h2 id="method1forwardasingleportthroughssh">Method 1: Forward a single port through SSH</h2>
<p>Make sure you know your router&apos;s IP before you run the SSH command; you can<br>
SSH into your remote machine and find the default gateway using the <code>route</code><br>
command. Once you know your remote router&apos;s IP, you can configure your tunnel<br>
using the <code>ssh</code> command:</p>
<pre><code>ssh -p [remote ssh port if changed] -L8080:[remote router IP]:80 [username]@[host]
</code></pre>
<p>Make sure that you fill in the variables in square brackets. After entering<br>
your user password, you can open your browser and access your router&apos;s web<br>
management interface at <code>127.0.0.1:8080</code>. Make sure that your firewall/iptables is not blocking port 8080.</p>
<h2 id="method2tunnelingthroughsocks">Method 2: Tunneling through SOCKS</h2>
<p>This method forwards ports dynamically, according to the port your browser<br>
or application requests. It is more complex, however, and is usually used to<br>
encrypt your traffic. Create a SOCKS tunnel using the <code>ssh</code> command:</p>
<pre><code>ssh -TND 4711 [username]@[host]
</code></pre>
<p>Make sure that you fill in the variables in square brackets. After entering<br>
your user password, you need to configure your browser to use the SOCKS proxy;<br>
you can find more information on configuring a web browser for a SOCKS proxy<br>
<a href="https://wiki.archlinux.org/index.php/Secure_Shell?ref=twcloud.tech#Encrypted_SOCKS_tunnel">here</a>.<br>
Once configured, enter the router&apos;s IP to access the router&apos;s web<br>
management interface.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Blueproximity for KDE4 Configuration]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>BlueProximity is a great addition to Linux; it adds a little extra security<br>
to your Linux machine. For those who don&apos;t know what it is, here&apos;s an excerpt<br>
from BlueProximity:</p>
<p><a href="http://blueproximity.sourceforge.net/?ref=twcloud.tech">BlueProximity</a> helps you add a little more security to your desktop. It does so<br>
by</p>]]></description><link>https://twcloud.tech/2014/02/15/blueproximity-for-kde4-configuration/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f429</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Sat, 15 Feb 2014 03:46:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>BlueProximity is a great addition to Linux; it adds a little extra security<br>
to your Linux machine. For those who don&apos;t know what it is, here&apos;s an excerpt<br>
from BlueProximity:</p>
<p><a href="http://blueproximity.sourceforge.net/?ref=twcloud.tech">BlueProximity</a> helps you add a little more security to your desktop. It does so<br>
by detecting one of your bluetooth devices, most likely your mobile phone, and<br>
keeping track of its distance. If you move away from your computer and the<br>
distance is above a certain level (no measurement in meters is possible) for a<br>
given time, it automatically locks your desktop (or starts any other shell<br>
command you want).</p>
<p>Once away your computer awaits its master back - if you are nearer than a given level for a set time your computer unlocks magically without any interaction (or starts any other shell command you want).</p>
<p>The problem I found when configuring BlueProximity on my Linux machine is the commands: if you are on GNOME, nearly no configuration is needed, but I am on KDE 4.x. So here are the commands I used to get it working on my machine:</p>
<ul>
<li>Lock command: <code>qdbus-qt4 org.kde.screensaver /ScreenSaver Lock</code></li>
<li>Unlock command: <code>qdbus-qt4 | grep kscreenlocker_greet | xargs -I {} qdbus-qt4 {} /MainApplication quit</code></li>
<li>Proximity command: <code>qdbus-qt4 org.freedesktop.ScreenSaver /ScreenSaver SimulateUserActivity</code></li>
</ul>
<p>If you are not using <code>qdbus-qt4</code>, substitute it with <code>qdbus</code>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Enable Internet Access on Raspberry Pi Through Wired Connection]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The first time I got my hands on a Raspberry Pi, I wanted to update its OS to the latest version, but the problem was that my RPi was connected to my laptop (running openSUSE) and I was using SSH to access it (I don&apos;t have access to any displays)</p>]]></description><link>https://twcloud.tech/2013/06/22/enable-internet-access-on-raspberry-pi-through-wired-connection/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f428</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Sat, 22 Jun 2013 03:43:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The first time I got my hands on a Raspberry Pi, I wanted to update its OS to the latest version, but the problem was that my RPi was connected to my laptop (running openSUSE) and I was using SSH to access it (I don&apos;t have access to any displays).</p>
<h2 id="preparation">Preparation</h2>
<p>First, prepare your SD card for the RPi; there are plenty of resources on how to do so. Then, follow <a href="http://www.raspberrypi.org/archives/3760?ref=twcloud.tech">this guide</a> to configure your RPi for SSH access from your machine.</p>
<h3 id="ipaddressing">IP Addressing</h3>
<p>For this guide, I am using the following addressing scheme:</p>
<ul>
<li>RPi eth0 =&gt; 192.168.10.2/24</li>
<li>Laptop eth0 (connected to RPi) =&gt; 192.168.10.1/24</li>
<li>Laptop wlan0 (connected to router, default gateway) =&gt; 192.168.1.6/24</li>
<li>Router (Internet access) =&gt; 192.168.1.254/24</li>
</ul>
<h2 id="fixingdefaultgatewayonmylaptop">Fixing Default Gateway on my Laptop</h2>
<p>When I connected to my RPi, NetworkManager replaced my default gateway with one pointing at <code>eth0</code>, so I was not able to access the Internet after connecting to the RPi. You can verify this by running the <code>route</code> command without parameters: if the default line points to eth0, your default gateway has been overwritten; if it doesn&apos;t, you can skip this section. You can also check by pinging <code>www.google.com</code>. To fix this problem, I deleted the default gateway and added a new default gateway that points to my router:</p>
<pre><code>$ sudo /sbin/route del default
$ sudo /sbin/route add default gw 192.168.1.254 dev wlan0
</code></pre>
<p>Verify that your default gateway now points to wlan0 using the <code>route</code> command without parameters. Try to ping <code>www.google.com</code> to make sure that your laptop has access to the Internet; your RPi won&apos;t be able to access the Internet if your laptop does not.</p>
<h2 id="addingdefaultgatewaytorpi">Adding Default Gateway to RPi</h2>
<p>Using the <code>route</code> command, add a default gateway on the RPi that points to the laptop (192.168.10.1) via the RPi&apos;s eth0 interface:</p>
<pre><code>$ sudo route add default gw 192.168.10.1 dev eth0
</code></pre>
<h2 id="theproblem">The Problem</h2>
<p>Without masquerading, once a packet sourced from the 192.168.10.0/24 network (my RPi &lt;==&gt; laptop subnet) reaches the router on subnet 192.168.1.0/24, the router will, depending on its configuration, either route or drop the packet. Even if the router successfully routes the packet to the Internet, the reply will definitely be dropped, because the router does not know where to route packets destined for the 192.168.10.0/24 network. So, masquerading seems to be the only solution here.</p>
<h2 id="masqueradingandconfiguration">Masquerading and Configuration</h2>
<p>Masquerading is like that Linux version of NAT, it translates your internal network to external network (e.g. for Internet access). What it does is that any packets bound for any network from 192.168.10.0/24 subnet will be translated to 192.168.1.0/24 subnet, so my router knows where to route my 192.168.10.0/24 packet.</p>
<p>Since I am using OpenSUSE, I will configure masquerading through its own firewall using YaST.</p>
<ol>
<li>Open up firewall configuration in YaST and select <code>Interfaces</code></li>
<li>Double click eth0 interface and change it to <code>External Zone</code></li>
<li>Select <code>Masquerading</code> on the left and click <code>Masquerade Networks</code></li>
<li>Add <code>80</code> to <code>requested port</code></li>
<li>Add <code>192.168.1.6</code> (or your wlan0 IP) to <code>Redirect to Masqueraded IP</code></li>
<li>Add <code>81</code> to <code>Redirect to Port</code>, or any port you want your traffic to be translated to, but make sure no server of yours is already using that port</li>
<li>Click <code>Add</code></li>
<li>Select <code>Startup</code> on the left and click <code>Save Settings and Restart Firewall Now</code></li>
<li>Try to ping <code>www.google.com</code> from your RPi; you should be able to reach it at this point</li>
</ol>
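<p>For reference, the same masquerading can be done without YaST using iptables directly. This is a minimal sketch under the addressing scheme above (interface names assumed; run as root):</p>

```shell
# Let the laptop forward packets between eth0 (RPi side) and wlan0 (router side)
sysctl -w net.ipv4.ip_forward=1

# Rewrite packets from the RPi subnet so they appear to come from wlan0
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o wlan0 -j MASQUERADE

# Allow the forwarded traffic in both directions
iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

<p>These rules are not persistent; they are lost on reboot unless saved with your distro&apos;s firewall tooling.</p>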
<h2 id="references">REFERENCES</h2>
<ol>
<li><a href="http://www.raspberrypi.org/archives/3760?ref=twcloud.tech">Using your desktop or laptop screen and keyboard with your Pi</a></li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Playing Multiple Tracks VCD on Linux]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I bought a VCD movie recently, only to find that it was not playable on my openSUSE; CD2 worked fine but not CD1. I went back to the shop for a replacement disc and came back to watch it, but unfortunately the problem persisted. The problem is that CD1</p>]]></description><link>https://twcloud.tech/2013/06/16/playing-multiple-tracks-vcd-on-linux/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f427</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Sun, 16 Jun 2013 03:41:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I bought a VCD movie recently, only to find that it was not playable on my openSUSE; CD2 worked fine but not CD1. I went back to the shop for a replacement disc and came back to watch it, but unfortunately the problem persisted. The problem is that CD1 contains multiple tracks, which most media players don&apos;t support (I tried VLC and Kaffeine). So I started to dig into the problem and found a thread that provides useful information on playing VCDs on Linux (refer to the references).</p>
<p>Basically there are two ways to work around this problem:</p>
<ol>
<li>Use MPlayer to play your disc</li>
<li>Rip your VCD using vcdxrip</li>
</ol>
<h2 id="1usemplayertoplayyourdics">1. Use MPlayer to play your disc</h2>
<p>MPlayer was actually quite new to me; it is launched through the console and does not appear in the menu. Here is the command to play a VCD using MPlayer (install it if you haven&apos;t already):</p>
<pre><code>mplayer vcd://&lt;track number&gt; -cdrom-device /dev/&lt;your cd rom device&gt;
</code></pre>
<p>Of course, you have to substitute <code>&lt;track number&gt;</code> and <code>&lt;your cd rom device&gt;</code> with the track number and your CD-ROM device. On my machine, it&apos;s on track 3 and my CD-ROM device is either dvd or sr0 (dvd is a symlink to sr0). So my complete command is:</p>
<pre><code>mplayer vcd://3 -cdrom-device /dev/sr0
</code></pre>
<p>Refer to mplayer&apos;s manpage for controls and details: <code>man mplayer</code>.</p>
<h2 id="2ripyourvcdusingvcdxrip">2. Rip your VCD using vcdxrip</h2>
<p>This did not work for me personally because it kept stopping at certain points; it just doesn&apos;t copy the whole VCD. But it is still good to know in case MPlayer does not work. The command is very easy and straightforward. Remember to change to a directory for your ripped files before executing it:</p>
<pre><code>vcdxrip -C
</code></pre>
<p>Multiple files will then be present in your current directory, which you can play using VLC or whatever media player you wish.</p>
<h2 id="references">REFERENCES</h2>
<ol>
<li><a href="http://www.mplayerhq.hu/DOCS/HTML/en/vcd.html?ref=twcloud.tech">MPlayer with VCD</a></li>
<li><a href="http://forums.linuxmint.com/viewtopic.php?f=48&amp;t=43106&amp;ref=twcloud.tech">Forums discussing issues with VCDs</a></li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using Juniper VPN on 64-bit Linux]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When I was supporting a company that uses Juniper VPN with my colleague, I found that Juniper VPN is only supported on 32-bit versions of Linux (though it is supported on 64-bit Windows and Mac machines; someone should ask them why they don&apos;t compile it for 64-bit Linux). I</p>]]></description><link>https://twcloud.tech/2013/05/18/using-juniper-vpn-on-64-bit-linux/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f426</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Sat, 18 May 2013 03:40:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>When I was supporting a company that uses Juniper VPN with my colleague, I found that Juniper VPN is only supported on 32-bit versions of Linux (though it is supported on 64-bit Windows and Mac machines; someone should ask them why they don&apos;t compile it for 64-bit Linux). I spent hours looking for a solution to this situation and found one particular solution that just works. I tried <a href="http://mad-scientist.net/juniper.html?ref=twcloud.tech">Mad Scientist&apos;s JNC (Juniper Network Connect)</a>, but it didn&apos;t work for some unknown reason.</p>
<p>This solution is based on <a href="http://dominique.leuenberger.net/blog/2010/07/juniper-vpn-on-opensuse-x86_64/?ref=twcloud.tech">Dominique Leuenberger&apos;s blog post &apos;Juniper VPN on openSUSE x86_64&apos;</a>; all credit goes to him/her.</p>
<h2 id="requirements">Requirements</h2>
<p>To use Juniper VPN, a JRE or JDK with browser plugins is a must (<a href="http://www.oracle.com/technetwork/java/javase/downloads/index.html?ref=twcloud.tech">Download Here</a>); it does not work with IcedTea or OpenJDK. We are not using any third-party solution, so we have to comply with Juniper VPN&apos;s system requirements.</p>
<h2 id="steps">Steps</h2>
<p>1. Download Juniper VPN through the software provided by the company. Once the applet is loaded, it should ask for your root/su password; just press [Enter] twice. It will create <code>.juniper_networks</code> in your home directory.<br>
<img src="https://twcloud.tech/content/images/2016/06/juniper-loading.png" alt="Juniper Loading Screen" loading="lazy"><br>
<img src="https://twcloud.tech/content/images/2016/06/juniper-pwdprompt.png" alt="Juniper Password Prompt" loading="lazy"></p>
<p>2. Change directory to <code>$HOME/.juniper_networks</code></p>
<pre><code>cd $HOME/.juniper_networks
</code></pre>
<p>3. Remove <code>network_connect</code> directory</p>
<pre><code>rm -rf network_connect
</code></pre>
<p>4. Extract <code>ncLinuxApp.jar</code></p>
<pre><code>unzip ncLinuxApp.jar
</code></pre>
<p>5. Use <code>ldd</code> to list the libraries required by <code>network_connect/libncui.so</code>, then use <code>zypper wp &lt;library&gt;</code> or <code>yum provides &lt;library&gt;</code> to find the packages that provide them.</p>
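<p>Step 5 might look like this in practice (the library name below is only an example; yours will differ):</p>

```shell
# Any line marked "not found" is a missing 32-bit library
ldd network_connect/libncui.so | grep "not found"

# Find the package that provides a missing library, e.g. on openSUSE:
zypper wp libgcc_s.so.1
# ...or on Fedora/RHEL:
yum provides libgcc_s.so.1
```
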
<p>6. Make a binary out of the library</p>
<pre><code>gcc -m32 -Wl,-rpath,`pwd` -o network_connect/ncui network_connect/libncui.so
</code></pre>
<p>7. Set permission and owner/group</p>
<pre><code>sudo chown root:root network_connect/ncui
sudo chmod 6711 network_connect/ncui
</code></pre>
<p>8. Get the certificate</p>
<pre><code>sh network_connect/ncui &lt;your Juniper VPN host&gt; &lt;certificatename&gt;.cer
</code></pre>
<p>9. Make sure that you are still logged in to your VPN host, and find your DSID by browsing through your browser&apos;s cookies for your VPN site. Search for the cookie named DSID.</p>
<p>10. Connect to Juniper VPN</p>
<pre><code>network_connect/ncui -h &lt;your Juniper VPN host&gt; -c DSID=&lt;value obtained in step 9&gt; -f &lt;certificate obtained in step 8&gt;.cer
</code></pre>
<p>11. (Optional) To ease future VPN connections, copy and paste the following script to <code>$HOME/bin/vpnConnect</code></p>
<pre><code>#!/bin/bash

if [ $# -lt 1 ]; then
        echo -e &quot;Usage:\t$0 &lt;DSID&gt;&quot;
        echo -e &quot;\n\tNOTE: DSID can be found in the cookie after you logged into your VPN site&quot;
        exit 0
fi

# Connect to your VPN
~/.juniper_networks/network_connect/ncui -h &lt;your vpn host&gt; -c DSID=$1 -f ~/.juniper_networks/&lt;cert from step 8&gt;.cer
</code></pre>
<p>12. (Continue step 11) Add executable bit <code>chmod +x $HOME/bin/vpnConnect</code></p>
<p>13. (To connect after step 12) Use <code>vpnConnect &lt;your DSID as in step 9&gt;</code> to connect</p>
<h2 id="alternativewaysforshortening">Alternative ways for shortening</h2>
<p>Personally, I prefer to use a script to shorten my commands, because it allows me to print usage notes when the arguments are wrong, but if you are not like me, you can use a Linux alias to shorten it; refer to <code>man alias</code> for usage or Google it =)</p>
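<p>If you prefer not to keep a script, note that the DSID lands in the middle of the command, so a shell function works better than a plain alias (an alias can only append arguments at the end). A sketch, with the same placeholders as the script above:</p>

```shell
# In ~/.bashrc -- usage: vpnConnect <DSID>
vpnConnect() {
    ~/.juniper_networks/network_connect/ncui \
        -h '<your vpn host>' \
        -c "DSID=$1" \
        -f ~/.juniper_networks/'<cert from step 8>'.cer
}
```
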
<h2 id="references">REFERENCES</h2>
<p>1. <a href="http://mad-scientist.net/juniper.html?ref=twcloud.tech">Mad Scientist&apos;s JNC (Juniper Network Connect)</a><br>
2. <a href="http://dominique.leuenberger.net/blog/2010/07/juniper-vpn-on-opensuse-x86_64/?ref=twcloud.tech">Dominique Leuenberger&apos;s blog on &apos;Juniper VPN on openSUSE x86_64&apos;</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Optimizing Linux Power Usage]]></title><description><![CDATA[<p>A lot of Linux distros are not optimised for laptops; some of them can use up a lot of power if you don&apos;t tune them. By the end of this guide, you should be able to reduce your power consumption by 3-5 watts. I know it doesn&apos;t seem like much, but</p>]]></description><link>https://twcloud.tech/2013/05/17/optimizing-linux-power-usage/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f425</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Fri, 17 May 2013 03:36:00 GMT</pubDate><content:encoded><![CDATA[<p>A lot of Linux distros are not optimised for laptops; some of them can use up a lot of power if you don&apos;t tune them. By the end of this guide, you should be able to reduce your power consumption by 3-5 watts. I know it doesn&apos;t seem like much, but it gives my machine 15-30 minutes more battery life. By default, my machine used more than 24 watts of power, as indicated by <code>powertop</code>. To find out the power usage of your machine:</p>
<!-- more -->
<p>1. Unplug your laptop&apos;s AC adapter or switch it off<br>
2. Enter <code>sudo /usr/sbin/powertop</code> in a terminal<br>
<img src="https://twcloud.tech/content/images/2016/06/powertopOverview.png" alt="Powertop" loading="lazy"></p>
<p>To reduce the power consumption, I installed <code>laptop-mode-tools</code>; as its name suggests, it is a tool for laptops. Once installed, I found that my wireless driver (<code>ath5k</code>) does not support power-saving mode yet, so I had to disable it: edit <code>/etc/laptop-mode/</code> and change <code>WIRELESS_BATT_POWER_SAVING=1</code> to <code>WIRELESS_BATT_POWER_SAVING=0</code>. This step is optional and may make no difference on your hardware.</p>
<p>To enable <code>laptop-mode</code>:</p>
<pre><code>$ sudo systemctl enable laptop-mode.service
</code></pre>
<p>Next, I created a custom script for <code>laptop-mode-tools</code> to enable certain power-saving features not included in the <code>laptop-mode-tools</code> modules:</p>
<p>1. Edit <code>$HOME/bin/powersaving_on</code> and add the following lines:</p>
<pre><code>#!/bin/sh

# ATI Radeon power saving
echo profile &gt; /sys/class/drm/card0/device/power_method
echo low &gt; /sys/class/drm/card0/device/power_profile

# Audio power saving
echo 1 &gt; /sys/module/snd_hda_intel/parameters/power_save
echo Y &gt; /sys/module/snd_hda_intel/parameters/power_save_controller

# Writeback time
echo 1500 &gt; /proc/sys/vm/dirty_writeback_centisecs
</code></pre>
<p>2. Edit <code>$HOME/bin/powersaving_off</code> and add the following lines:</p>
<pre><code>#!/bin/sh

# ATI Radeon power saving
echo profile &gt; /sys/class/drm/card0/device/power_method
echo default &gt; /sys/class/drm/card0/device/power_profile

# Audio power saving
echo 2 &gt; /sys/module/snd_hda_intel/parameters/power_save
echo N &gt; /sys/module/snd_hda_intel/parameters/power_save_controller

# Writeback time
echo 500 &gt; /proc/sys/vm/dirty_writeback_centisecs
</code></pre>
<p>3. Add executable bit to both scripts:</p>
<pre><code>$ chmod +x $HOME/bin/powersaving_on; chmod +x $HOME/bin/powersaving_off
</code></pre>
<p>4. Create symbolic links for laptop-mode-tools:</p>
<pre><code>$ sudo ln -s /home/&lt;username&gt;/bin/powersaving_on /etc/laptop-mode/batt-start/powersaving_on;\
sudo ln -s /home/&lt;username&gt;/bin/powersaving_on /etc/laptop-mode/lm-ac-stop/powersaving_on;\
sudo ln -s /home/&lt;username&gt;/bin/powersaving_off /etc/laptop-mode/batt-stop/powersaving_off;\
sudo ln -s /home/&lt;username&gt;/bin/powersaving_off /etc/laptop-mode/lm-ac-start/powersaving_off;\
sudo ln -s /home/&lt;username&gt;/bin/powersaving_off /etc/laptop-mode/nolm-ac-start/powersaving_off;\
sudo ln -s /home/&lt;username&gt;/bin/powersaving_off /etc/laptop-mode/nolm-ac-stop/powersaving_off
</code></pre>
<p><strong>Explanation and Notes:</strong><br>
<em>Step 1</em>: Enable some powersaving features to reduce power usage (require root permission), see the script&apos;s comments. You can change <code>echo low &gt; /sys/class/drm/card0/device/power_profile</code> to <code>echo mid &gt; /sys/class/drm/card0/device/power_profile</code> if you need more power</p>
<p><em>Step 2</em>: Disables the power-saving features by setting all values back to their defaults</p>
<p><em>Step 3</em>: Make both scripts executable</p>
<p><em>Step 4</em>: I have written it in a way that you can copy and paste into your terminal emulator in one step; just replace <code>&lt;username&gt;</code> with your username. <code>laptop-mode-tools</code> provides a way for users to execute scripts on AC or battery power by placing them in the corresponding directories:</p>
<ul>
<li>/etc/laptop-mode/batt-start: Executed when laptop enters battery mode</li>
<li>/etc/laptop-mode/batt-stop: Executed when laptop exits battery mode</li>
<li>/etc/laptop-mode/lm-ac-start: Executed when <code>laptop-mode</code> is enabled AND laptop enters AC mode</li>
<li>/etc/laptop-mode/lm-ac-stop: Executed when <code>laptop-mode</code> is enabled AND laptop exits AC mode</li>
<li>/etc/laptop-mode/nolm-ac-start: Executed when <code>laptop-mode</code> is disabled through <code>/etc/laptop-mode/laptop-mode.conf</code> AND laptop enters AC mode</li>
<li>/etc/laptop-mode/nolm-ac-stop: Executed when <code>laptop-mode</code> is disabled through <code>/etc/laptop-mode/laptop-mode.conf</code> AND laptop exits AC mode</li>
</ul>
<p><strong>Other Tips:</strong><br>
1. Disable Bluetooth: <code>sudo rfkill block bluetooth</code><br>
2. It seems the monitor uses the most power (11-18 watts depending on brightness on my machine); reduce brightness to save more power<br>
3. Another power killer is WiFi (more than 6 watts on my machine), so turn it off if you don&apos;t use it</p>
<p><strong>REFERENCES:</strong><br>
1. <a href="https://wiki.archlinux.org/index.php/Power_saving?ref=twcloud.tech">Great ArchWiki Article on Power Saving</a><br>
2. <a href="http://aubreypwd.com/blog/2012/09/14/howto-ubuntu-12-04-open-source-radeon-drivers-and-power-management/?ref=twcloud.tech">ATI Radeon Power Management Guide</a><br>
3. <a href="http://www.linuxjournal.com/article/7539?page=0%2C1&amp;ref=twcloud.tech">Linux Journal Article on laptop-mode-tools</a><br>
4. <a href="http://www.overclock.net/t/731469/how-to-power-saving-with-the-radeon-driver?ref=twcloud.tech">Using ATI Radeon Power Management with laptop-mode-tools</a></p>
]]></content:encoded></item><item><title><![CDATA[Change the Default (S2RAM) Suspend Module to Uswsusp]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>If you have any issues suspending your laptop, e.g. the <code>suspend</code> command doesn&apos;t work, try changing the default sleep module to <code>uswsusp</code>:</p>
<ol>
<li>
<p>Edit <code>/etc/pm/config.d/module</code> and add the following line:<br>
<code>SLEEP_MODULE=uswsusp</code></p>
</li>
<li>
<p>Edit <code>/etc/pm/config.d/defaults</code> and add</p></li></ol>]]></description><link>https://twcloud.tech/2013/05/12/change-default-suspend-method-to-s2ram/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f423</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Sun, 12 May 2013 09:57:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>If you have any issues suspending your laptop, e.g. the <code>suspend</code> command doesn&apos;t work, try changing the default sleep module to <code>uswsusp</code>:</p>
<ol>
<li>
<p>Edit <code>/etc/pm/config.d/module</code> and add the following line:<br>
<code>SLEEP_MODULE=uswsusp</code></p>
</li>
<li>
<p>Edit <code>/etc/pm/config.d/defaults</code> and add the following line:<br>
<code>S2RAM_OPTS=&quot;-f&quot;</code></p>
</li>
<li>
<p>Reboot and try to let her sleep.</p>
</li>
</ol>
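<p>If you prefer the terminal, the two edits above can also be done in one go (a sketch; the paths follow the <code>pm-utils</code> layout described above):</p>

```shell
# Append the sleep module and s2ram options to the pm-utils config files
echo 'SLEEP_MODULE=uswsusp' | sudo tee -a /etc/pm/config.d/module
echo 'S2RAM_OPTS="-f"' | sudo tee -a /etc/pm/config.d/defaults
```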
<h2 id="references">References</h2>
<ul>
<li><a href="http://en.opensuse.org/SDB:Suspend_to_RAM?ref=twcloud.tech">OpenSuse Documentation on Suspending</a></li>
<li><a href="http://askubuntu.com/questions/54591/use-s2ram-when-closing-lid-with-kde?ref=twcloud.tech">AskUbuntu Thread</a></li>
</ul>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Fixing incorrect lid state]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When I install a Linux distro to my VAIO notebook, I found that there is an annoying bug with the lid switch. It does not get updated whenever I suspend on lid close, it means <code>cat /proc/acpi/button/lid/LID/state</code> will output <code>state:	close</code>. When I close the</p>]]></description><link>https://twcloud.tech/2013/04/29/fixing-incorrect-lid-state/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f42b</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Mon, 29 Apr 2013 03:50:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>When I install a Linux distro to my VAIO notebook, I found that there is an annoying bug with the lid switch. It does not get updated whenever I suspend on lid close, it means <code>cat /proc/acpi/button/lid/LID/state</code> will output <code>state:	close</code>. When I close the lid again, it won&apos;t suspend, instead, it will change the state to open. So in order for it to suspend again on lid close after the first suspend, I have to close it, reopen the lid and close it again.</p>
<p>I have tried installing Linux Mint, Fedora, Fuduntu and Xubuntu, but the bug appears in all of them, so I don&apos;t think it is a distro problem. While researching this issue (on which I spent two full days), I found that Linux has an amazing feature that enables users to dynamically load a custom DSDT at boot time; there is no need to update the BIOS. So here are the instructions:</p>
<p>1. Install <code>iasl</code> using <code>yum</code>, <code>apt-get</code> or whatever package manager you are using.</p>
<p>2. Extract DSDT:</p>
<pre><code>$ sudo cat /sys/firmware/acpi/tables/DSDT &gt; dsdt.aml
</code></pre>
<p>3. Disassemble <code>dsdt.aml</code> using the following command; this should create a new file, <code>dsdt.dsl</code>:</p>
<pre><code>$ iasl -d dsdt.aml
</code></pre>
<p>4. Compile it using:</p>
<pre><code>$ iasl -tc dsdt.dsl
</code></pre>
<p>5. Fix any compiler errors, warnings and remarks. On my machine, the output is:</p>
<pre><code>dsdt.dsl  1352:                         And (CTRL, 0x1E)
Warning  1106 -                                 ^ Result is not used, operator has no effect

dsdt.dsl  1584:                     0x00000000,         // Length
Error    4122 -                              ^ Invalid combination of Length and Min/Max fixed flags

dsdt.dsl  2443:                                 Name (_T_0, 0x00)
Remark   5111 -            Use of compiler reserved name ^  (_T_0)

dsdt.dsl  2521:                                 Name (_T_0, 0x00)
Remark   5111 -            Use of compiler reserved name ^  (_T_0)
</code></pre>
<p>a. The first one, on line 1352, can be fixed simply by changing <code>And (CTRL, 0x1E)</code> to <code>And (CTRL, 0x1E, CTRL)</code>.</p>
<p>b. The second one is on line 1584: the length should be <code>Range Maximum</code> - <code>Range Minimum</code> + 1, so fire up a hex calculator and start subtracting. On my machine, it&apos;s <code>0xE0000000</code> (<code>0xDFFFFFFF</code> - <code>0x00000000</code> + <code>0x00000001</code>).</p>
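<p>If you don&apos;t have a hex calculator handy, the shell can do the arithmetic for you (using the values from my machine):</p>

```shell
# Length = Range Maximum - Range Minimum + 1, printed as hex
printf '0x%X\n' $(( 0xDFFFFFFF - 0x00000000 + 1 ))
# prints 0xE0000000
```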
<p>c. The third and fourth are on lines 2443 and 2521; because they use a compiler-reserved name, simply replacing all instances of <code>_T_0</code> with <code>T_0</code> will stop the complaints. In vim, it is as simple as issuing <code>:%s/_T_0/T_0/g</code> in command mode.</p>
<p>6. Once everything is fixed (no errors, warnings or remarks), add the following lines to the <code>_WAK</code> method; simply search for <code>_WAK</code> in <code>dsdt.dsl</code>:</p>
<pre><code>If (LNotEqual (0x00, LIDS))
{
    Store (0x00, LIDS)
    Notify (\_SB.LID, 0x80)
}
</code></pre>
<p><strong>NOTE 1:</strong> You might need to change <code>\_SB.LID</code> to match your path to the <code>LID</code> method (on some machines it is named <code>LID0</code>). Method names are preceded by an <code>_</code> (underscore), so you can search for <code>_LID</code> in <code>dsdt.dsl</code>. After you find it, you have to determine the scope: scroll up until you find the <code>Scope</code> keyword that your <code>LID</code> or <code>LID0</code> method belongs to; inside the brackets is the scope name. It may be nested in more than one scope, so it might be, e.g., <code>\_PCI0.SB.LID</code>. If you specify an incorrect path to the <code>LID</code> method, you will receive the following error:</p>
<pre><code>dsdt.dsl   300:             Notify (LID, 0x80)
Error    4068 -                       ^ Object is not accessible from this scope (LID_)
</code></pre>
<p><strong>NOTE 2:</strong> All this snippet does is update the lid state once the machine resumes from sleep. According to the ACPICA documentation, the <code>_WAK</code> method is called by ACPI&apos;s <code>AcpiLeaveSleepState()</code> function. If the lid is open, the <code>LIDS</code> variable is <code>0x00</code>, or <code>0x01</code> otherwise. So these few lines translate to &quot;if the lid state is not open (i.e. closed), change the lid state to open and notify the <code>LID</code> device&quot;.</p>
<p>7. Compile it using <code>iasl -tc dsdt.dsl</code>.</p>
<p>8. If there are no errors, warnings or remarks, create <code>/etc/grub.d/01_acpi</code> with the following contents:</p>
<pre><code># Uncomment to load custom ACPI table
GRUB_CUSTOM_ACPI=&quot;/boot/dsdt.aml&quot;


# DON&apos;T MODIFY ANYTHING BELOW THIS LINE!


prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib


. /usr/share/grub/grub-mkconfig_lib
#. ${libdir}/grub/grub-mkconfig_lib


# Load custom ACPI table
if [ x${GRUB_CUSTOM_ACPI} != x ] &amp;&amp; [ -f ${GRUB_CUSTOM_ACPI} ] \
	&amp;&amp; is_path_readable_by_grub ${GRUB_CUSTOM_ACPI}; then
    echo &quot;Found custom ACPI table: ${GRUB_CUSTOM_ACPI}&quot; &gt;&amp;2
    prepare_grub_to_access_device `${grub_probe} --target=device ${GRUB_CUSTOM_ACPI}` | sed -e &quot;s/^/  /&quot;
    cat &lt;&lt; EOF
acpi (\$root)`make_system_path_relative_to_its_root ${GRUB_CUSTOM_ACPI}`
EOF
fi
</code></pre>
<p>9. Add executable bit to it:</p>
<pre><code>$ sudo chmod +x /etc/grub.d/01_acpi
</code></pre>
<p>10. Copy the new <code>dsdt.aml</code> to <code>/boot</code>:</p>
<pre><code>$ sudo cp dsdt.aml /boot
</code></pre>
<p>11. Regenerate <code>grub.cfg</code>:</p>
<pre><code>$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
</code></pre>
<p>12. Reboot</p>
<h2 id="references">References</h2>
<ol>
<li><a href="https://wiki.archlinux.org/index.php/DSDT?ref=twcloud.tech">Archwiki on DSDT</a></li>
<li><a href="https://bugzilla.redhat.com/show_bug.cgi?id=676031&amp;ref=twcloud.tech">Redhat&apos;s Bug Report</a></li>
<li><a href="https://bugs.launchpad.net/ubuntu/+source/linux/+bug/34389?ref=twcloud.tech">Ubuntu&apos;s Bug Report 1</a></li>
<li><a href="https://bugs.launchpad.net/ubuntu/+source/linux/+bug/44825?ref=twcloud.tech">Ubuntu&apos;s Bug Report 2</a></li>
<li><a href="http://sadevil.org/blog/2012/01/01/fixing-the-acpi-dsdt-of-an-acer-ferrari-one-200/?ref=twcloud.tech">Somebody&apos;s blog on fixing DSDT errors, remarks and warnings</a></li>
<li><a href="https://www.acpica.org/documentation?ref=twcloud.tech">ACPICA Documentation</a></li>
</ol>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Permanent DNS Settings for All Network Interfaces]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>DNS settings in Linux are usually stored in the <code>/etc/resolv.conf</code> file; you can basically just edit this file to change the DNS settings on any Linux system. However, the change is not permanent: it will be overwritten by Network Manager when you reconnect or reboot. So, to make the</p>]]></description><link>https://twcloud.tech/2013/04/23/permanent-dns-settings-for-all-network-interfaces/</link><guid isPermaLink="false">5bb7970aef7f8c118b09f422</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Ting Wai]]></dc:creator><pubDate>Tue, 23 Apr 2013 10:02:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>DNS settings in Linux are usually stored in the <code>/etc/resolv.conf</code> file; you can basically just edit this file to change the DNS settings on any Linux system. However, the change is not permanent: it will be overwritten by Network Manager when you reconnect or reboot. So, to make the change permanent, there are two methods:</p>
<ul>
<li>Network Manager&#x2019;s dispatcher script method</li>
<li>Immutable attribute method</li>
</ul>
<h2 id="method1networkmanagersdispatcherscriptmethod">Method 1: Network Manager&#x2019;s Dispatcher Script Method</h2>
<p>This method simply copies <code>/etc/resolv.conf.googledns</code> over <code>/etc/resolv.conf</code> every time NetworkManager connects to the network.</p>
<ol>
<li>
<p>Create and edit the file <code>/etc/resolv.conf.googledns</code> with root privileges, e.g. <code>sudoedit /etc/resolv.conf.googledns</code></p>
</li>
<li>
<p>Add the following lines (I&apos;m using Google DNS here; you can change them to any DNS you want):</p>
<pre><code> nameserver 8.8.8.8
 nameserver 8.8.4.4
</code></pre>
</li>
<li>
<p>Create and edit the file <code>/etc/NetworkManager/dispatcher.d/12-dns_server</code> and add the following lines (dispatcher scripts are run as root, so <code>sudo</code> is not needed inside the script):</p>
<pre><code> #!/bin/sh
 cp -f /etc/resolv.conf.googledns /etc/resolv.conf
</code></pre>
</li>
<li>
<p>Make <code>/etc/NetworkManager/dispatcher.d/12-dns_server</code> executable:</p>
<pre><code> sudo chmod +x /etc/NetworkManager/dispatcher.d/12-dns_server
</code></pre>
</li>
<li>
<p>Restart NetworkManager service: <code>sudo systemctl restart NetworkManager</code></p>
</li>
</ol>
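<p>NetworkManager passes the interface name and the event type to dispatcher scripts as arguments, so a slightly more careful version of the script (an optional sketch, not required for the basic method) could restore the DNS only on connection events:</p>

```shell
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/12-dns_server (sketch)
# NetworkManager invokes dispatcher scripts as: <script> <interface> <event>
event="$2"

case "$event" in
    up|dhcp4-change|dhcp6-change)
        # The connection has (re)written resolv.conf; restore the preferred servers
        cp -f /etc/resolv.conf.googledns /etc/resolv.conf
        ;;
esac
```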
<h2 id="method2immutableattributemethod">Method 2: Immutable Attribute Method</h2>
<p>The <em>immutable</em> attribute is one of the extended file attributes supported since <code>ext2</code>. Since the <code>resolv.conf</code> file is automatically regenerated every time Network Manager connects, you can mark the file as <em>immutable</em> so that it will never be replaced. You can set <code>resolv.conf</code> as immutable using the <code>chattr</code> utility as follows (<em>remember to edit the file and add all the DNS servers before setting the attribute</em>):</p>
<pre><code>sudo chattr +i /etc/resolv.conf
</code></pre>
<p>Whenever you want to change the DNS, the attribute has to be removed first using the following command (identical to the previous one, except that <code>-i</code> removes the attribute instead of <code>+i</code> adding it):</p>
<pre><code>sudo chattr -i /etc/resolv.conf
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>