GitHub: Make use of Draft Pull Requests

Earlier this month I wrote about a simple GitHub workflow.

In this post, I want to talk about draft PRs - a powerful feature that can enhance your development process and team collaboration.

What are Draft Pull Requests?

Draft Pull Requests are a GitHub feature that allows you to create a pull request that’s explicitly marked as “not ready for review.” They’re perfect for when you want to:

  • Share work-in-progress with your team
  • Get early feedback on direction
  • Track progress on larger features
  • Prevent accidental merges
  • Enable CI/CD testing before code review

When to Use Draft PRs

I’ve just read Michael Kaufmann’s book Accelerate DevOps with GitHub.

The author argues that draft PRs should be created immediately when you start working on code.

This is a great idea if your team already has the fundamentals up and running.

Besides this, here’s when to use draft PRs:

1. Large Features or Refactoring

When working on substantial changes that span multiple commits or days:

# Start your feature branch
git checkout -b feature/user-authentication
git push -u origin feature/user-authentication

# Create a draft PR immediately
gh pr create --draft --title "Add user authentication system" --body "WIP: Implementing OAuth2 integration"

2. Early Feedback and Direction

Want to validate your approach before investing too much time:

# Create a draft PR with just the basic structure
gh pr create --draft --title "Proposed API structure" --body "Looking for feedback on this approach before implementing the full feature"

3. Collaborative Development

Working with team members on the same feature:

# Create draft PR for collaborative work
gh pr create --draft --title "Database schema redesign" --body "Collaborating with @teammate on this. Please add your changes to this branch."

Creating Draft PRs

Using the GitHub CLI

# Create a draft PR
gh pr create --draft --title "Your title" --body "Your description"

# Or convert existing PR to draft
gh pr ready --undo

Using GitHub Web Interface

  1. Push your branch to GitHub
  2. Click “Compare & pull request”
  3. Click “Create draft pull request” instead of “Create pull request”

Converting Between Draft and Ready

Draft → Ready for Review

# Using GitHub CLI
gh pr ready

# Or on GitHub web interface
# Click "Ready for review" button

Ready → Draft

# Using GitHub CLI
gh pr ready --undo

# Or on GitHub web interface
# Click "Convert to draft" button

Best Practices

1. Clear Communication

Always explain what you’re working on and what feedback you need:

## WIP: User Authentication System

### What's implemented:
- [x] Basic OAuth2 setup
- [x] User model
- [ ] JWT token handling
- [ ] Password reset flow

### Questions for the team:
- Should we use refresh tokens?
- What's the preferred session timeout?

### Next steps:
- Complete JWT implementation
- Add tests
- Update documentation

2. Regular Updates

Keep your draft PR updated with progress:

# Commit and push regularly
git add .
git commit -m "Add JWT token validation"
git push

# Update PR description as you progress
gh pr edit --body "Updated description with latest progress"

3. Use Labels

Add appropriate labels to your draft PRs:

# Add labels for better organization
gh pr edit --add-label "work-in-progress"
gh pr edit --add-label "needs-review"

I recently automated label assignment using branch names, which worked great for our team.
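If you want to try something similar, here’s a minimal sketch using the gh CLI; the branch prefixes and label names are assumptions for illustration:

# hypothetical sketch: pick a label from the current branch prefix
branch=$(git branch --show-current)
case "$branch" in
  feature/*) gh pr edit --add-label "enhancement" ;;
  fix/*)     gh pr edit --add-label "bug" ;;
  docs/*)    gh pr edit --add-label "documentation" ;;
esac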

Draft PRs vs Regular PRs

| Feature | Draft PR | Regular PR |
| --- | --- | --- |
| Merge protection | ✅ Cannot be merged | ❌ Can be merged |
| Review requests | ❌ Code owners are not auto-requested | ✅ Reviews requested as usual |
| CI/CD | ✅ Runs normally | ✅ Runs normally |
| Team visibility | ✅ Visible to team | ✅ Visible to team |
| Status badge | 🟡 “Draft” badge | 🟢 “Ready for review” |

Integration with Your Workflow

Building on the workflow from my previous post, here’s how draft PRs fit in:

Enhanced Team Workflow

  1. Create feature branch (same as before)
  2. Create draft PR immediately (new step)
  3. Work incrementally with regular commits
  4. Convert to ready when complete
  5. Request reviews and iterate
  6. Merge and cleanup (same as before)
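Using the GitHub CLI, the whole loop might look something like this (branch and reviewer names are placeholders):

# 1-2. branch and open a draft PR right away
git checkout -b feature/my-feature
gh pr create --draft --fill

# 3. work incrementally
git commit -am "wip: first slice"
git push

# 4-5. mark it ready and ask for review
gh pr ready
gh pr edit --add-reviewer teammate

# 6. merge and cleanup
gh pr merge --squash --delete-branch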

Common Patterns

1. Spike PRs

For experimental work or proof-of-concepts:

gh pr create --draft --title "Spike: Redis caching" --body "Testing Redis integration. Will delete if not viable."

2. Feature Flags

For features that need to be toggled:

gh pr create --draft --title "Feature: Dark mode" --body "Implementing dark mode with feature flag. Ready for testing."

3. Breaking Changes

For major refactoring:

gh pr create --draft --title "BREAKING: API v2" --body "Major API changes. Need extensive testing before merge."

Key Takeaways

Draft PRs are perfect when you want to:

  • Work more closely with your team
  • Share work-in-progress safely
  • Get early feedback without pressure
  • Enable CI/CD testing early
  • Prevent accidental merges
  • Improve team collaboration
  • Track progress on large features

Draft PRs are about collaboration: they make your development process more transparent. It’s also much more fun to work together, and I feel it makes the culture of only shipping “perfect code” less of a thing.

Happy coding!

Developer Productivity: Embracing Digital Minimalism

I read the book Digital Minimalism by Cal Newport.

This post is not a review of the book, but a summary of my takeaways that helped me improve my productivity.

While I’ve focused specifically on developer productivity insights, I actually learned a ton about my own behavior and how to improve my focus and presence outside of work (topics not covered in this post).

I included some quotes from the book that I felt were relevant. I highly recommend reading the book!

Creating your toolbox

  1. Be intentional about your tools, and find out how to optimize them.
  2. Don’t use unnecessary tools. Schedule time slots for migrating away from unnecessary platforms, apps, and tools.
  3. Document your tools: what problem they solve and whether there’s a better way of solving it.

Here’s my template for documenting which tools I use:

## Tool/App: ______________________

1. 🧭 Does this support something I **deeply value**?
   - ☐ Yes
   - ☐ No

2. 🥇 Is this the **best way** to support that value?
   - ☐ Yes
   - ☐ No
   - ☐ Unsure

3. 🪫 Do I use this **intentionally**, or **compulsively/by default**?
   - ☐ Intentional
   - ☐ Compulsive
   - ☐ Both

4. 🧠 What would my life feel like **without it for 30 days**?
   - _______________________

5. 🛠 If I removed it, what would I **replace it with**?
   - _______________________

6. 🧱 Could I keep this app but add **friction** or **limits**?
   - ☐ Yes → How? _______________________
   - ☐ No → Then consider removing.

→ Decision:
- ☐ Keep as-is
- ☐ Keep but limit
- ☐ Remove temporarily

Listing all your digital tools and having one note for each service/application is a lot of work. However, it makes you intentional about your choices.

Digital minimalists believe that deciding a particular technology supports something they value is only the first step. To truly extract its full potential benefit, it’s necessary to think carefully about how they’ll use the technology.

- Cal Newport - Digital Minimalism

Working

  1. Schedule a time slot for a specific activity. Be intentional about time in front of your computer.
  2. Be relentless about focusing on this activity within the time slot. No distractions.
  3. Take silent walks alone. A silent walk is just a regular walk, without any silicon or lithium.

Time slots will make you productive. Silent walks will spawn ideas and boost creativity, while giving your mind a break from inbound traffic.

Digital Minimalism: A philosophy of technology use in which you focus your online time on a small number of carefully selected and optimized activities that strongly support things you value, and then happily miss out on everything else.

- Cal Newport - Digital Minimalism

Notifications & Distractions

  1. Keep your phone in DND (Do Not Disturb). Allow your emergency contacts through, along with any mandatory apps.
  2. Read your email once a day. Email makes you context switch.
  3. Avoid context switching if possible. Use DND and a busy light, and inform colleagues and friends of your time slot.
  4. If using social media, don’t use the mobile app. Use a browser and be intentional about what to do on social media. Use a time slot.

I’m not a huge fan of this quote because I feel it simplifies the complexity of social media, but it has some truth to it for sure:

The tycoons of social media have to stop pretending that they’re friendly nerd gods building a better world and admit they’re just tobacco farmers in T-shirts selling an addictive product to children. Because, let’s face it, checking your “likes” is the new smoking.

- Bill Maher

Behavior

  1. Make your phone boring. Be intentional about the unlock/quick check behavior. Have time slots for answering or checking your phone.
  2. Less is more. When getting new hardware or software, be intentional about its purpose. Will it replace something? Does it align with my values and work style?
  3. Many tools and gadgets may make your life a tiny bit more convenient. Is it worth it, or is the purchase driven by emotions?
  4. Planning activities does not rob them of spontaneity and relaxation. The planning takes very little time, and during the activity itself, spontaneous and creative moments will likely occur.
  5. Having “Nothing to do” is likely not relaxing. Without intentional use of technology, there’s a large risk that the nothingness will be filled with low-quality activities.

The sugar high of convenience is fleeting and the sting of missing out dulls rapidly, but the meaningful glow that comes from taking charge of what claims your time and attention is something that persists.

Digital minimalists recognize that cluttering their time and attention with too many devices, apps, and services creates an overall negative cost that can swamp the small benefits that each individual item provides in isolation

- Cal Newport - Digital Minimalism

To sum it up

Digital Minimalism is not about avoiding technology, but about being intentional about how you use it.

It’s about filtering out distractions and focusing on what’s important to you.

Translated to developer productivity, it’s about moving forward with your projects and not being distracted by the noise of the internet.

Good luck!

GitHub: A Practical Guide to Branches and Pull Requests

A Common “simple” workflow

I often see this sort of workflow:

git add .
git commit -m "asd"
git push

Many System Administrators and DevOps engineers start with this simple approach.

Being the sole user of version control they feeli satisfied with just having their code versioned.

The problem arises the day another team member joins or (what’s more likely) you join a larger team. This is important because larger teams require more structure and coordination to work effectively with version control, and will most likely require this from you before you get to work.

GitHub Pull Requests rule

When it comes to GitHub, you could argue that the Pull Request is GitHub’s greatest feature (maybe GitHub Actions are the second greatest, hehe).

Let’s look at a simple workflow that utilizes this feature.

It’s designed to promote collaboration and has features that promote DevOps:

  • Knowledge Sharing
  • Code Review
  • Testing
  • Continuous Integration
  • Documentation
  • Team Collaboration
  • Audit Trail

And much much more!

The workflow (git kata)

Using branches, Git provides a structured way to manage changes, review them, and roll back if needed.
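For example, if a merged change turns out to be bad, you can roll it back without rewriting history (the commit SHA below is a placeholder):

# undo a bad merge on main with an inverse commit
git revert -m 1 <merge-commit-sha>
git push origin main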

A Team Workflow

Use this in a team environment where you have access to the repository.

  1. Update your local main:

    git checkout main
    git pull origin main
  2. Create and switch to new branch:

    git checkout -b feature-branch
  3. Make your changes and commit:

    git add .
    git commit -m 'Your commit message'
  4. Push to remote:

    git push -u origin feature-branch
  5. Create PR on GitHub from your branch to main

  6. After PR is merged, cleanup:

    git checkout main
    git pull origin main
    git branch -d feature-branch

Think of branches like snapshots of your system. Each branch is a safe place to make changes without affecting the main system.

Open Source Forking Workflow

When contributing to open source projects, you won’t have direct access to the main repository.

This is where forking comes in - it’s like creating your own copy of the project that you can modify freely.

Think of it as getting your own sandbox to play in, while still being able to share your changes with the original project.

  1. Fork the repository on GitHub

    • Click the “Fork” button in the top-right corner
    • This creates your own copy of the repository under your GitHub account
  2. Clone your fork to your local machine:

    git clone https://github.com/your-username/repo.git
  3. Add the original repository as upstream:

    git remote add upstream https://github.com/original-owner/repo.git
  4. Keep your fork in sync

    This is crucial - you need to make sure your local copy, your fork, and the original repository are all on the same page.

    Always make sure your fork is up to date before starting new work. You’ll minimize merge conflicts this way. This should be something you think about every time you open your IDE.

    git checkout main
    git pull origin main    # Get changes from your fork
    git pull upstream main  # Get changes from original repo
  5. Create your feature branch:

    git checkout -b feature-branch
  6. Make your changes and push to your fork:

    git add .
    git commit -m 'Your changes'
    git push -u origin feature-branch
  7. Create a Pull Request on GitHub

    • Go to your fork on GitHub
    • Click “Compare & pull request”
    • Select your feature branch to merge into the original repository’s main branch
  8. After your PR is merged, clean up:

    git checkout main
    git push -d origin feature-branch  # Delete remote branch
    git branch -d feature-branch       # Delete local branch
    git remote prune origin            # Clean up stale references

When to Use Each Workflow

  • Team Workflow: Use when you have direct access to the repository and are working with a team
  • Forking Workflow: Use when:
    • Contributing to open source projects
    • You don’t have write access to the main repository
    • You want to experiment with changes without affecting the main repository
    • You want to maintain your own version of a project

What do you mean Kata?

I first heard the term Kata from Michael Lombardi in his Getting GitHub repo (which I highly recommend you check out).

The idea is a small daily exercise to help you get used to the workflow and build memory through repetition.

In this case, the workflow I’ve described above can be used as a daily practice:

  1. Create a new branch
  2. Make a small change
  3. Create a PR
  4. Get it reviewed and merged, or close it yourself if it’s not needed and was only for learning purposes
  5. Clean up

This is a safe, simple and powerful way to get used to the workflow and build memory through repetition, which is one of my favorite ways to learn something new!
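A daily kata run can be as small as this sketch (the branch and file names are just examples):

git checkout main && git pull origin main
git checkout -b kata/daily
echo "practiced on $(date +%F)" >> kata.log
git add kata.log
git commit -m "docs: daily git kata"
git push -u origin kata/daily
gh pr create --fill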

But it’s annoying to click Merge after every git commit!

Gotcha, luckily there are some tools that can help you with this.

GitHub CLI

GitHub CLI is a command line tool that allows you to interact with GitHub from the terminal.

Once you’ve learned the workflow above, the GitHub CLI and auto-merge can automate most of it. It won’t be as simple as pushing directly to main, but it’s a good compromise, and sooner or later you’ll have to learn it anyway.
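For example, auto-merge can be enabled per PR (assuming auto-merge is allowed in the repository settings), so GitHub merges it on its own once required checks pass:

# open the PR and let GitHub merge it when checks pass
gh pr create --fill
gh pr merge --auto --squash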

Key takeaways

Use this workflow if:

  • Your repo contains code that you value
  • You want to collaborate with others
  • You want to learn Git and GitHub
  • You want to be a better developer
  • You’re using Git + GitHub at work

Do:

  • Practice the workflow daily
  • Create a tool to help you remember the sequence of commands at first (I created my own python script)

Have fun and happy learning!

Open Source: Launch Flatpak Apps Faster with This One-Liner

I’m using Sway on Arch (btw), and naturally I need a terminal GUI app launcher. I was quite surprised how well this works; the prerequisites are fzf and flatpak (note that the one-liner below uses fish-style command substitution; a bash version follows further down):

flatpak run (flatpak list --app --columns=application | fzf) > /dev/null 2>&1 &

How It Works

  • flatpak list --app --columns=application
    Lists all installed Flatpak applications, showing only their application IDs.
  • fzf
    Lets you interactively search and select an app from the list.
  • flatpak run (...)
    Runs the selected app.
  • > /dev/null 2>&1 &
    Hides any output and runs the app in the background.

Flatpak CLI Details

  • --app
    Filters the list to only show applications (not runtimes or extensions).
  • --columns=application
    Shows only the application ID, making the output clean and easy to use with scripts.

Bash Function for Easy Use

Add this to your ~/.bashrc or ~/.bash_profile to use it as runfp:

runfp() {
  flatpak run $(flatpak list --app --columns=application | fzf) > /dev/null 2>&1 &
}

Lately I started using fish-shell, so my function looks like this:

function runfp
  flatpak run (flatpak list --app --columns=application | fzf) > /dev/null 2>&1 &
end

I love the way fish-shell has a default workflow for creating functions, it’s so easy to use and maintain.

function # hit enter
    #write your function
end

funcsave function-name

# git commit and push to github

Version Control Your Shell Configurations

As mentioned, I version control my custom shell functions.

You can version control your ~/.config/fish/functions folder (for fish) or your ~/.bashrc (for bash) using Git and GitHub. Bash and zsh don’t have fish’s funcsave workflow, though, so you’ll have to set that up yourself.

When I use bash I usually version control my ~/.bashrc as a GitHub gist, but I’m not a fan of it, because it’s not as modular as self-contained functions in a repo.

That said, a gist is great because you can use the GitHub CLI to create or edit a gist from the terminal!

# create a gist from a file
gh gist create ~/.bashrc --desc "My bashrc"

# list gists
gh gist list

# edit a gist
gh gist edit <gist-id> --add ~/.config/fish/functions/mycoolfunc.fish

Too easy!

How to do it with GitHub CLI:

  • Initialize a repo and push to GitHub:
    cd ~/.config/fish
    git init
    git add functions/
    git commit -m "Track my fish functions"
    gh repo create --public --source=. --remote=origin --push
    # Or for bash:
    cd ~
    git init
    git add .bashrc
    git commit -m "Track my bashrc"
    gh repo create --public --source=. --remote=origin --push

Benefits:

  • Never lose your dear functions!
  • Sync your shell setup across multiple machines
  • Share your functions with others
  • Fun!

Happy function-writing!

GitOps: ITSM tools are not DevOps tools

A bit of a high level design rant, so pardon the fluff.

Problem: ITSM Tools Often Block Iteration Speed

A typical AWS Landing Zone workflow:

  • User submit requests through ServiceNow for account access

  • The Catalog Item (ServiceNow request form) is not that great and is tricky to use

  • Approvals may take days

  • Once approved, the DevOps pipeline is triggered, but because it’s hard to build sensible API calls between an ITSM system and a DevOps tool, the data is harder to sanitize, making the run more error-prone

  • The user is presented with 200 lines of different logs after days of waiting

Additional problems with the above workflow

  • DevOps teams struggle with improving the workflow due to an overworked ITSM team faced with compliance and audit requirements

  • The actual provisioning happens outside ITSM, and the response messages are not that great (formatting is hard to do right here)

  • The DevOps engineer tasked with this work is usually not stoked about doing it

The core issue: ITSM tools are good at simple CRUD operations, but most of them are not DevOps tools.

Solution: Onboard the user to GitHub

Use ITSM tools for what they’re good at (access requests and compliance) while letting GitHub handle the DevOps pipeline.

Phase 1: ITSM tool Manages Repository Access

Developers request access to the infrastructure provisioning repository through standard ITSM processes:

  1. ServiceNow Request: “Access to aws-account-factory GitHub repository”
  2. Justification: “Need development environment for ML project”
  3. Approval Chain: Manager → Security → Infrastructure Team

Once approved, developers receive:

  • GitHub repository access
  • Documentation on account request process
  • YAML templates for their specific use case

Phase 2: GitHub Handles Technical Implementation

With repository access granted, the user creates a PR with an edited .yaml template, and the feedback loop can begin (Dev + Ops).

# accounts/engineering/ml-project-dev.yaml
account_name: "ml-project-development"
environment: "development"
cost_center: "engineering"
team_lead: "sarah@company.com"
compliance_level: "standard"

Phase 3: GitHub PR Drives the Workflow

The pull request becomes the technical collaboration space:

# .github/CODEOWNERS
accounts/engineering/* @infrastructure-team @security-team
accounts/production/* @infrastructure-team @security-team @compliance-team

This step will catch many bugs and help the entire engineering organization be more efficient.

Phase 4: Automated Integration Back to ServiceNow

GitHub Actions provisions infrastructure and updates ServiceNow with the Configuration Item (CI):

# .github/workflows/provision-account.yml
name: Provision AWS Account
on:
  push:
    paths: ['accounts/**/*.yaml']
    
jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - name: Terraform Apply
        run: terraform apply -auto-approve
      - name: Update ServiceNow CI
        run: |
          curl -X POST "$SERVICENOW_API/cmdb_ci_aws_account" \
            -H "Authorization: Bearer $SERVICENOW_TOKEN" \
            -d '{
              "account_id": "${{ vars.AWS_ACCOUNT_ID }}",
              "environment": "${{ vars.ENVIRONMENT }}",
              "cost_center": "${{ vars.COST_CENTER }}",
              "provisioned_date": "${{ vars.CURRENT_DATE }}"
            }'          

ITSM compliance: ServiceNow maintains complete configuration item (CI) records and audit trails while technical teams work in their preferred tools.

Technical Implementation

Use a proper Infrastructure as Code (IaC) tool to provision the infrastructure; great examples are Terraform and Pulumi.
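Whatever tool you pick, let the PR run basic checks before a human reviews it. With Terraform, a minimal sketch of such checks could be:

# example PR checks (Terraform assumed)
terraform fmt -check
terraform init -backend=false
terraform validate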

Repository Structure

aws-account-factory/
├── docs/
│   ├── getting-started.md
│   └── templates/
├── accounts/
│   ├── engineering/
│   ├── security/
│   └── production/
├── terraform/
│   ├── modules/
│   └── environments/
├── scripts/
│   └── copy-template.sh
└── .github/
    ├── workflows/
    └── CODEOWNERS

API Integration

GitHub Actions updates ServiceNow automatically:

- name: Update ServiceNow CMDB
  run: |
    curl -X POST "$SERVICENOW_API/cmdb_ci_aws_account" \
      -d '{
        "account_id": "${{ vars.AWS_ACCOUNT_ID }}",
        "environment": "${{ vars.ENVIRONMENT }}",
        "cost_center": "${{ vars.COST_CENTER }}"
      }'    

Why This Works Better

Speed Improvements

  • PR feedback is immediate, not dependent on ITSM ticket updates
  • GitHub Actions runs in parallel, not sequential ITSM workflow steps
  • Developers can iterate on configurations without going back through forms

Better Collaboration

  • Infrastructure teams review actual YAML configurations
  • Security teams can suggest specific code changes
  • All discussions happen with full technical context

Discussion

ITSM tools and DevOps tools solve different problems. ServiceNow is good at managing access and compliance workflows. GitHub is industry leading at technical collaboration and automation.

This approach uses both tools for what they do well. The API integration keeps ServiceNow updated while letting technical teams work efficiently.

Have fun, now you only have about 25,000 lines of code to write!

Happy building!

Git Clone vs Fork - What's the Difference?

This is something I’ve sort of understood but never quite got around to stamping out the differences, so I felt like sharing it!

Let’s look at the key differences between Git’s clone and GitHub’s fork operations, and when to use which one.

Git Clone

Cloning creates a local copy of a repository. When running git clone, you get:

  • The entire .git directory
  • All branches and history
  • A working copy of the files
  • Remote tracking to the original repo (origin)

Example:

git clone https://github.com/user/repo
cd repo
git remote -v # Shows origin pointing to source

GitHub Fork

Forking is a GitHub feature (not Git) that creates your own copy of a repo on GitHub. Key points:

  • Lives on GitHub under your account
  • Independent from the original repo
  • Enables pull request workflow
  • Can sync with original (upstream) repo

Typical fork workflow:

# 1. Fork via GitHub UI
# 2. Clone your fork
git clone https://github.com/YOUR-USERNAME/repo
# 3. Add upstream remote
git remote add upstream https://github.com/ORIGINAL-OWNER/repo
# 4. Create branch and work
git checkout -b feature-branch

When to Clone

Use clone when:

  • You have write access to the repo OR
  • You just need a local copy to work with
  • You’re doing internal development
  • You don’t plan to contribute back but just want to run the code locally

When to Fork

Fork when:

  • Contributing to open source projects
  • You need your own version of a project
  • You don’t have write access to original repo
  • You want to propose changes via pull requests

Keeping Forks Updated

To sync your fork with upstream:

git fetch upstream
git checkout main
git merge upstream/main
git push origin main

Key Differences Between Forking and Cloning

Here’s a breakdown of the technical and practical differences:

| Aspect | Forking | Cloning |
| --- | --- | --- |
| Scope | Creates a copy of the repository on GitHub under your account. | Creates a local copy of a repository on your machine. |
| Location | Server-side (on GitHub). | Local (on your computer). |
| Ownership | You own the fork and have full control over it. | You don’t own the repository; you just have a local copy. |
| Collaboration | Designed for contributing to projects via pull requests. | Primarily for local development or direct pushes (if you have access). |
| Upstream relationship | Forked repo is independent but can sync with the original via remotes. | Cloned repo is tied to the original remote (origin) unless reconfigured. |
| Use case | Ideal for contributing to open-source projects or maintaining a derivative. | Ideal for local development, testing, or private work. |
| Git command | Not a Git command; it’s a GitHub feature. More on using the GitHub CLI below! | git clone <url> is a native Git command. |

Using GitHub CLI

The GitHub CLI (gh) makes working with forks and clones even easier:

# Fork and clone in one command
gh repo fork user/repo --clone=true

# Just fork (no clone)
gh repo fork user/repo

# Clone your existing fork
gh repo clone YOUR-USERNAME/repo

# Create PR from your fork
gh pr create --base main --head YOUR-USERNAME:feature-branch

Pro tip: gh repo fork automatically sets up the upstream remote for you, saving the manual git remote add upstream step.
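You can verify what it set up by listing the remotes (the usernames below are placeholders):

git remote -v
# origin    https://github.com/YOUR-USERNAME/repo.git (fetch)
# upstream  https://github.com/original-owner/repo.git (fetch)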

Installing the GitHub CLI

# macos
brew install gh
# windows
winget install -e --id GitHub.cli -s winget
# linux - thanks -> https://dev.to/raulpenate/begginers-guide-installing-and-using-github-cli-30ka

# Arch
sudo pacman -S github-cli

# Debian, Ubuntu Linux, Raspberry Pi OS (apt)
(type -p wget >/dev/null || (sudo apt update && sudo apt-get install wget -y)) \
&& sudo mkdir -p -m 755 /etc/apt/keyrings \
&& wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
&& sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null \
&& sudo apt update \
&& sudo apt install gh -y
# Upgrade
sudo apt update
sudo apt install gh

# Fedora, CentOS, Red Hat Enterprise Linux (dnf)
sudo dnf install 'dnf-command(config-manager)'
sudo dnf config-manager --add-repo https://cli.github.com/packages/rpm/gh-cli.repo
sudo dnf install gh --repo gh-cli
# Alternatively, install from the community repository:
sudo dnf install gh
# Upgrade
sudo dnf update gh

# openSUSE/SUSE Linux (zypper)
sudo zypper addrepo https://cli.github.com/packages/rpm/gh-cli.repo
sudo zypper ref
sudo zypper install gh
# Upgrade
sudo zypper ref
sudo zypper update gh

Common Fork Workflow

Here’s a typical workflow I use:

# Fork and clone
gh repo fork original-owner/repo --clone=true

# Create feature branch
git checkout -b my-feature

# Make changes, then commit
git add .
git commit -m "feat: add awesome feature"

# Push to fork
git push -u origin my-feature

# Create PR using GitHub CLI
gh pr create

Closing Thoughts

Both forking and cloning use Git’s object model (blobs, trees, commits), but:

  • Clone: Local copy of Git objects
  • Fork: Server-side copy with independent Git refs

The main difference is where the copy lives and how you can interact with the original repo.

Also, when cloning a repo, be aware of how you’ve authenticated. If you clone using SSH, you’ll need to add the SSH key to your GitHub account.

If you clone using HTTPS, you’ll need to enter your GitHub username and a personal access token (GitHub no longer accepts account passwords for Git operations).

I highly recommend using SSH or the GitHub CLI to clone repos - it’s much easier once you’ve set it up.
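If you already cloned over HTTPS, you can switch the existing clone to SSH without re-cloning (username and repo are placeholders):

# point the existing clone at the SSH URL instead
git remote set-url origin git@github.com:your-username/repo.git
git remote -v  # verify the new URL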

Happy Cloning and Forking!

Python: Virtual Environments

Python’s virtual environment module (venv) limits dependency and version conflicts by:

  • Isolating the python environment
  • Separating dependencies on a project basis

Usage

# Create a venv:
python3 -m venv .venv

# Activate venv:
source .venv/bin/activate

# On Windows
.\.venv\Scripts\activate

# Deactivate
deactivate

Workflow

  1. Create a venv using the module (-m) flag, or via your IDE (VS Code -> Ctrl+Shift+P -> Python: Create Environment)
  2. Activate it using your OS specific way
  3. Add .venv (venv name in the example above) to .gitignore
  4. Develop python code, install packages: pip install <package>
  5. Once done, freeze requirements: pip freeze > requirements.txt
  6. Recreate exact environment on another host: pip install -r requirements.txt
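End to end, that workflow looks something like this in a shell (the requests package is just an example):

python3 -m venv .venv              # 1. create the venv
source .venv/bin/activate          # 2. activate it
echo ".venv/" >> .gitignore        # 3. keep it out of git
pip install requests               # 4. install dependencies
pip freeze > requirements.txt      # 5. freeze them
deactivate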

Mucho Importante

  1. Always activate the venv when working on the project
  2. Add the venv folder to .gitignore; other users will build it locally
  3. Use pip freeze after installing new dependencies to update the requirements.txt

Always look on the bright side of isolation ✅

Happy coding

PowerShell: Restore a DNS zone in Active Directory

Beware of the copy-paste trap! Always test public code in a safe, isolated environment before running it in production.

The fast version

Did someone just remove a very important AD-integrated DNS forward lookup zone for you?

Hang tight, and I’ll show you how to get it back.

  1. Using Domain Admin access rights, have any type of elevated PowerShell session open with the DnsServer and ActiveDirectory modules imported
  2. Open notepad and save the script below as “Restore-ADDNSZone.ps1” at any location
  3. .\Restore-ADDNSZone.ps1 -ZoneName 'myzone.org'
  4. If the zone was just deleted and the DC has access to the deleted zone objects, your zone will be restored. Verify by looking in DNS management.

If you’re not in a hurry, I recommend that you read what the script does first and test it in a lab.


DNS Zone restore the simple way

I wrote a simple script to demonstrate how a DNS zone restore can be achieved using the Restore-ADObject cmdlet:

  • Importing Required Modules: Loads ActiveDirectory and DnsServer modules.
  • Setting Parameters: Allows specifying a DNS zone name, defaulting to “ehmiizblog”.
  • Searching for Deleted Zone: Looks for the deleted DNS zone in known AD locations.
  • Retrieving Deleted Records: Fetches resource records for the deleted zone.
  • Restoring Zone & Records: Restores the DNS zone and its records to their original names.
  • Restarting DNS Service: Restarts the DNS service to apply changes.
  • Output Messages: Provides feedback on the restoration progress and completion.
#Requires -Version 5.0 -Modules DnsServer, ActiveDirectory
param(
    [string]$ZoneName = "ehmiizblog"
)
<#
.SYNOPSIS
Restores a DNS zone using the DnsServer & ActiveDirectory modules
.DESCRIPTION
An AD-integrated DNS primary zone can be quickly restored, with
all its records, using this script. The script looks in known
locations for the deleted zone and its resource records (`dnsZone`
& `dnsNode` objects). The restored zone is also renamed to its
original name.
.EXAMPLE
.\Restore-ADDNSZone.ps1 -ZoneName 'myzone.org'
.NOTES
Run this in a lab setting before you try it in prod
#>
Import-Module ActiveDirectory, DnsServer -ErrorAction Stop

$DomainDN = (Get-ADDomain).DistinguishedName
[System.Collections.ArrayList]$global:DeletedZoneDN = @(
    "CN=MicrosoftDNS,CN=System,$DomainDN"
    "DC=DomainDnsZones,$DomainDN"
)

function Get-DeletedDNSZone {
    param(
        [string]$ZoneName = $ZoneName
    )
    $DeletedZoneDN | ForEach-Object {
        # Define the lookup parameters
        $FindZoneSplat = @{
            LDAPFilter            = "(&(name=*..Deleted-$($ZoneName)*)(ObjectClass=dnsZone))"
            SearchBase            = $_
            IncludeDeletedObjects = $true
            Properties            = "*"
        }
        # Look for the zone
        $LookForTheZone = Get-ADObject @FindZoneSplat
        if (-not [System.String]::IsNullOrEmpty($LookForTheZone)) {
            $LookForTheZone | Select-Object -First 1
        }
    }
}

function Get-DeletedDNSZonesResourceRecords {
    # Get the deleted DNS zone
    $DeletedZone = Get-DeletedDNSZone
    if ([string]::IsNullOrEmpty($DeletedZone)) {
        Write-Warning -Message "Zone: $ZoneName not found."
        break
    }
    # Use the zone's WhenChanged as the deletion timestamp
    $DeletionTimeStamp = $DeletedZone.whenChanged
    # Iterate over each DeletedZoneDN
    $DeletedZoneDN | ForEach-Object {
        Get-ADObject -Filter { WhenChanged -ge $DeletionTimeStamp -and ObjectClass -eq 'dnsNode' -and isDeleted -eq $true } -SearchBase $_ -IncludeDeletedObjects
    }
}

$TheDeletedZone = Get-DeletedDNSZone
$TheDeletedRecords = Get-DeletedDNSZonesResourceRecords

if ($TheDeletedZone -and $TheDeletedRecords) {
    Write-Output "Starting the zone restore.."
    $TheDeletedZone | Restore-ADObject -NewName $ZoneName -Verbose -ErrorAction Stop
    $TheDeletedRecords | Restore-ADObject -Verbose -ErrorAction Stop
    Restart-Service DNS -Verbose -ErrorAction Stop
    Write-Output "Zone restore completed."
}

Didn’t work, what now?

If you have access to a backup of the DNS server, you can export a .dns file and rebuild the zone on the production server.

The steps below will vary depending on your situation, but they might give you an idea of the process:

Sidenote: The “Above explained” bullets add further explanation to the command in the bullet right before them.

  1. Connect to the backup DC
  2. Export the zone using dnscmd: dnscmd /ZoneExport zone.org zone.org_backup.dns
  3. Attach a disk or storage device to the DC, mount it, and move the newly created zone data file zone.org_backup.dns to it
  4. Attach the disk to the PDC
  5. Copy the file to system32\dns
  6. Create the new zone using dnscmd:
    • dnscmd SERVER /zoneadd zone.org /primary /file zone.org_backup.dns
    • Above explained: Adds a zone to the DNS server.
    • dnscmd SERVER /zonereload zone.org
    • Above explained: Copies zone information from its source.
    • This creates a non-AD-integrated DNS zone with the resource records from the export
  7. Convert the zone from non-AD-integrated to AD-integrated:
    • dnscmd SERVER /zoneresettype zone.org /dsprimary
    • Above explained: Creates an Active Directory integrated zone.


Happy restoring

Linux on GU605MI: Sound, Keyboard, Brightness & asusctl

Disclaimer: Please note that while these steps have been provided to assist you, I cannot guarantee that they will work flawlessly in every scenario. Always proceed with caution and make sure to back up your data before making any significant changes to your system.

Written on 2024-05-08 (Note: Information may become outdated soon, and this was just my approach)

If you’re a proud owner of the 2024 Asus Rog Zephyrus G16 (GU605MI) and running Fedora 40+, ensuring smooth functionality of essential features like sound, keyboard, screen brightness, and asusctl might require a bit (hehe) of tweaking.

Here’s a comprehensive guide, or really the steps I took, to get everything up and running.

Ensure Kernel Compatibility

First things first, ensure that your kernel version is at least 6.9.*. If you’re on a newer kernel, skip this step.

Kernel 6.9 has audio improvements for Intel’s new 14th gen CPUs, so it’s mandatory for the GU605 to have it.
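You can check the kernel you’re currently running with:

# print the running kernel version
uname -r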

You might want to research on how to perform this in a safer way.

I trust in Fedora and the Copr build system, so I just executed the following:

sudo dnf copr enable @kernel-vanilla/mainline
sudo dnf update -y
# Wait for transactions to complete (may take 5+ minutes)
systemctl reboot

Follow the Fedora Workstation Guide

Refer to the Fedora Workstation guide provided by Asus: Fedora Guide. The steps I took myself were the following:

# Updates the system
sudo dnf update -y
sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

# Installs the nvidia driver
sudo dnf update -y
sudo dnf install kernel-devel
sudo dnf install akmod-nvidia xorg-x11-drv-nvidia-cuda

# Enable hibernate
sudo systemctl enable nvidia-hibernate.service nvidia-suspend.service nvidia-resume.service nvidia-powerd.service

# Install asusctl and superfxctl, used to interact with the system
# Installs Rog Control gui (to interact with the command line interfaces graphically)
sudo dnf copr enable lukenukem/asus-linux
sudo dnf update

sudo dnf install asusctl supergfxctl
sudo dnf update --refresh
sudo systemctl enable supergfxd.service

sudo dnf install asusctl-rog-gui

Install Firmware, needed as of 2024-05-08

In the future the firmware might be added to the Linux kernel; if the sound works great after you’ve updated the system, skip this step.

The sound will not work without the correct firmware. We can clone the correct firmware and copy it over to our system using the following lines:

git clone https://gitlab.com/kernel-firmware/linux-firmware.git
cd linux-firmware
sudo dnf install rdfind
make install DESTDIR=installdir
sudo cp -r installdir/lib/firmware/cirrus /lib/firmware
systemctl reboot

Fix Screen Brightness

The screen’s brightness works out of the box while on the dGPU.

However, that comes with certain drawbacks, like flickering Electron applications and increased power consumption. The steps below get the screen brightness controls to work in “Hybrid” and “Integrated” mode (while the display is run by the iGPU).

Open the grub configuration file:

sudo nano /etc/default/grub

Add the following string at the end of the GRUB_CMDLINE_LINUX= line:

quiet splash nvidia-drm.modeset=1 i915.enable_dpcd_backlight=1 nvidia.NVreg_EnableBacklightHandler=0 nvidia.NVreg_RegistryDwords=EnableBrightnessControl=0

After editing, the line should look like this:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.driver.blacklist=nouveau modprobe.blacklist=nouveau rhgb quiet rd.driver.blacklist=nouveau modprobe.blacklist=nouveau acpi_backlight=native quiet splash nvidia-drm.modeset=1 i915.enable_dpcd_backlight=1 nvidia.NVreg_EnableBacklightHandler=0 nvidia.NVreg_RegistryDwords=EnableBrightnessControl=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true

Update the grub configuration and reboot:

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
systemctl reboot

With these steps, I was able to get a somewhat functional GU605MI Fedora system. If you encounter any issues, refer to the respective documentation or seek further assistance from the Asus-Linux community.

Happy computing!

PowerShell Guide: Script as a Windows Service

Red or blue pill

If you are in the same rabbit-hole as I was, of running any form of looping script as a Windows Service, there are two pills you can choose from:

  1. Red Pill: Create a program that abides by the law of the fearsome Service Control Manager.

  2. Blue Pill: Write a PowerShell script, 8 lines of XML, and download WinSW.exe

WinSW describes itself as follows:

A wrapper executable that can run any executable as a Windows service, in a permissive license.

Naturally as someone who enjoys coding with hand grenades, I took the Blue Pill and here’s how that story went:

The Blue Pill

  1. Create a new working directory and save it to a variable
$DirParams = @{
    ItemType    = 'Directory'
    Name        = "PowerShell_Service"
    OutVariable = 'WorkingDirectory'
}
New-Item @DirParams
  2. Download the latest WinSW-x64.exe to the working directory
# Get the latest WinSW 64-bit executable browser download url
$ExecutableName = 'WinSW-x64.exe'
$LatestURL = Invoke-RestMethod 'https://api.github.com/repos/winsw/winsw/releases/latest'
$LatestDownloadURL = ($LatestURL.assets | Where-Object {$_.Name -eq $ExecutableName}).browser_download_url
$FinalPath = "$($WorkingDirectory.FullName)\$ExecutableName"

# Download it to the newly created working directory
Invoke-WebRequest -Uri $LatestDownloadURL -Outfile $FinalPath -Verbose
  3. Create the PowerShell script which the service runs

This loop checks for notepad every 5 sec and kills it if it finds it

while ($true) {
    $notepad = Get-Process notepad -ErrorAction SilentlyContinue
    if ($notepad) {
        $notepad.Kill()
    }
    Start-Sleep -Seconds 5
}
  4. Construct the .XML file

Just edit the id, name, description and startarguments

<service>
  <id>PowerShellService</id>
  <name>PowerShellService</name>
  <description>This service runs a custom PowerShell script.</description>
  <executable>C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe</executable>
  <startarguments>-NoLogo -file C:\Path\To\Script\Invoke-PowerShellServiceScript.ps1</startarguments>
  <log mode="roll"></log>
</service>

Save the .xml; in this example I saved it as PowerShell_Service.xml

# if not already, step into the workingdirectory
cd $WorkingDirectory.FullName

# Install the service
.\WinSW-x64.exe install .\PowerShell_Service.xml

# Make sure powershell.exe's executionpolicy is Bypass
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine

# As an administrator
Get-Service PowerShellService | Start-Service

Conclusion

Running a PowerShell script as a service on any Windows machine isn’t that complicated thanks to WinSW. It’s a great choice if you don’t want to get deeper into the process of developing Windows services (it’s kind of a fun rabbit-hole though).

I recommend reading the WinSW docs.

Some things to consider:

  • The service will run PowerShell 5.1 as System
  • Meaning the execution policy must support that use case (Bypass at LocalMachine scope will do)
  • The script in this example is just a demo of a loop, but anything you can think of that loops will do here
  • Starting the Service requires elevated rights in this example
  • If you get the notorious The service did not respond to the start or control request in a timely fashion, you have my condolences (This is a very general error msg that has no clear answer by itself it seems)

Good luck have fun, happy coding

/Emil