
Dumping packets from anywhere in the networking stack | Red Hat Developer


Dumping traffic on a network interface is one of the most frequently performed steps when debugging networking and connectivity issues. On Linux, tcpdump is probably the most common way to do this, but some use Wireshark too.

Where does tcpdump get the packets from?

Internally, both tcpdump and Wireshark use the Packet Capture (pcap) library. When capturing packets, a socket with the PF_PACKET domain is created (see man packet), which allows you to receive and send packets at layer 2 of the OSI model.

From libpcap:

sock_fd = is_any_device ?
       socket(PF_PACKET, SOCK_DGRAM, 0) :
       socket(PF_PACKET, SOCK_RAW, 0);

Note that the last parameter in the socket call is later set to a specific protocol, or ETH_P_ALL if none is explicitly provided. The latter makes the socket receive all packets.

This retrieves packets directly after the device driver on ingress, without any change being made to the packet, and right before they enter the device driver on egress. To put it differently, packets are seen between the networking stack and the NIC drivers.
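To make this concrete, here is a minimal sketch in C (not taken from libpcap) of such a capture socket; it needs CAP_NET_RAW or root to run, and error handling is kept to a minimum:

#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_ether.h>   /* ETH_P_ALL */
#include <arpa/inet.h>        /* htons */

int main(void)
{
    /* SOCK_RAW with ETH_P_ALL: receive every packet, layer 2 header included */
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    unsigned char buf[65535];
    ssize_t len = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
    if (len > 0)
        printf("captured %zd bytes\n", len);
    return 0;
}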

Limitations

While the above use of PF_PACKET works nicely, it also comes with limitations. Because packets are retrieved from a single, well-defined place in the networking stack, they can only be seen in the state they were in at that point: on ingress, packets are seen before being processed by the firewall or qdiscs, and the opposite is true on egress.

Offline analysis

By default, tcpdump and Wireshark process packets live at runtime. But they can also store the captured packet data in a file for later analysis (the -w option for tcpdump). The pcap file format (application/vnd.tcpdump.pcap) is used. Both tools (and others, e.g., tshark) support reading pcap-formatted files.
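For example (the interface name below is only an illustration), writing a capture to a file and reading it back later looks like this:

$ tcpdump -ni eth0 -w capture.pcap
$ tcpdump -nr capture.pcap
$ tshark -r capture.pcap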

How to capture packets from other places?

Retrieving packets from other places in the networking stack using tcpdump or Wireshark is not possible. However, other tools have emerged that target monitoring traffic within a single host, such as Retis (documentation).

Retis is a recently released tool aiming at improving visibility into the Linux networking stack and its various control and data paths. It captures networking-related events and provides relevant context using eBPF; one notable feature is capturing packets on any packet-aware (AKA socket buffer) kernel function and tracepoint.

To capture packets from the net:netif_receive_skb tracepoint:

$ retis collect -c skb -p net:netif_receive_skb
4 probe(s) loaded
4581128037918 (8) [irq/188-iwlwifi] 1264 [tp] net:netif_receive_skb
 if 4 (wlp82s0) 2606:4700:4700::1111.53 > [redacted].34952 ttl 54 label 0x66967 len 79 proto UDP (17) len 71

Note that Retis can capture packets from multiple functions and tracepoints by using the above -p option multiple times. It can even identify packets and reconstruct their flow! To get a list of compatible functions and tracepoints, use retis inspect -p.

It should also be noted that, by default, tcpdump and Wireshark put devices in promiscuous mode when dumping packets from a specific interface. This is not the case with Retis. An interface can be put in this mode manually by using ip link set <interface> promisc on.

In addition to the above, another tool provides a way to capture packets and convert them to a pcap file: bpftrace. It is a wonderful tool, but it is more low-level and requires you to write the probe definitions by hand and to compile the BPF program on the target. Here the skboutput function can be used, as shown in the help.

Making the link

That's nice, but while Retis is a powerful tool when used standalone, we might still want to use the existing tcpdump and Wireshark tools, only with packets captured from other places in the networking stack.

This can be done using the Retis pcap post-processing sub-command. It works in two steps: first Retis captures and stores packets, then it post-processes them. The pcap sub-command converts packets saved by Retis to the pcap format, which can then feed existing pcap-aware tools such as tcpdump and Wireshark:

$ retis collect -c skb -p net:netif_receive_skb -p net:net_dev_start_xmit -o
$ retis print
4581115688645 (9) [isc-net-0000] 12796/12797 [tp] net:net_dev_start_xmit
 if 4 (wlp82s0) [redacted].34952 > 2606:4700:4700::1111.53 ttl 64 label 0x79c62 len 59 proto UDP (17) len 51
4581128037918 (8) [irq/188-iwlwifi] 1264 [tp] net:netif_receive_skb
 if 4 (wlp82s0) 2606:4700:4700::1111.53 > [redacted].34952 ttl 54 label 0x66967 len 79 proto UDP (17) len 71

$ retis pcap --probe net:net_dev_start_xmit | tcpdump -nnr -
01:31:55.688645 IP6 [redacted].34952 > 2606:4700:4700::1111.53: 28074+ [1au] A? redhat.com. (51)

$ retis pcap --probe net:netif_receive_skb -o retis.pcap
$ wireshark retis.pcap

As seen above, Retis can collect packets from multiple probes during the same session. All packets seen on a given probe can then be filtered and converted to the pcap format.

When generating pcap files, Retis adds a comment in every packet with a description of the probe the packet was retrieved on:

$ capinfos -p retis.pcap
File name:           retis.pcap
Packet 1 Comment:    probe=raw_tracepoint:net:netif_receive_skb

In many cases, tools like tcpdump and Wireshark are sufficient. But, due to their design, they can only dump packets from a very specific place in the networking stack, which in some cases can be limiting. When that's the case, it's possible to use more recent tools like Retis, either standalone or in combination with the beloved pcap-aware utilities, to keep using familiar tools or to integrate easily into existing scripts.


Red Hat OpenStack Services on OpenShift: Rethinking storage design in pod-based architectures


With the release of Red Hat OpenStack Services on OpenShift, there is a major change in the design and architecture that impacts how OpenStack is deployed and managed. The OpenStack control plane has moved from traditional standalone containers on Red Hat Enterprise Linux (RHEL) to an advanced pod-based Kubernetes managed architecture.

Introducing Red Hat OpenStack Services on OpenShift

In this new form factor, the OpenStack control services such as keystone, nova, glance and neutron, which were once deployed as standalone containers on top of bare metal or virtual machines (VMs), are now deployed as native Red Hat OpenShift pods, leveraging the flexibility, placement, abstraction and scalability of Kubernetes orchestration.

The OpenStack compute nodes that run VMs still rely on RHEL, the difference being that they are provisioned by Metal3 and configured by an OpenShift operator using Red Hat Ansible Automation Platform behind the scenes. It is worth noting that it’s still possible to bring preprovisioned nodes with RHEL pre-installed.

New approach, new storage considerations

Deploying and managing the OpenStack control plane on top of OpenShift brings several new advantages, but it also comes with new storage considerations.

Previously, the OpenStack control plane was deployed as three “controllers”, which usually took the form of bare metal servers or, in some cases, VMs.

In terms of storage, the OpenStack control services used the server’s local disk(s) to write persistent data (or a network storage backend when booting from your storage area network (SAN)).

With the shift to a native OpenShift approach, the OpenStack control services are dynamically scheduled across OpenShift workers as pods. This approach introduces a number of benefits, but the default pod storage option is to use ephemeral storage. Ephemeral storage is perfectly fine for stateless services such as the service’s API, but not appropriate for services that require persistent data such as the control plane database. When a pod restarts or terminates, it must get its data back.

Fortunately, OpenShift provides a persistent storage abstraction layer in the form of “Persistent Volumes” (PV) and “Persistent Volume Claims” (PVC), which enable pods to mount volumes that persist beyond a pod’s lifecycle. This persistent storage framework is tightly coupled with another standard called the Container Storage Interface (CSI), which allows OpenShift to provision volumes from a variety of storage backends, provided the storage vendor offers a certified CSI driver.
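As an illustration only (the names and storage class below are assumptions, not taken from the product), a control plane service could request persistent storage with a claim along these lines:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: galera-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: lvms-vg1   # or an ODF / third-party CSI storage class

The CSI driver behind the storage class provisions the volume, and the pod mounts it so the data survives restarts and rescheduling.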

Red Hat OpenStack Services on OpenShift, high level design


This is where the paradigm changes. In previous versions of Red Hat OpenStack, the control services' persistent data was stored on the controllers' local disks, and no further design decisions were needed besides the size, type, performance and RAID level of the disks.

With OpenStack Services on OpenShift, a storage solution must also be considered for OpenShift alongside the traditional OpenStack storage.

In this article, we dive into the main available options to back OpenShift and OpenStack data for environments that are using Ceph or third-party storage solutions.

Before we get into the details, you may wonder which OpenStack control services need persistent storage:

  • Glance for the staging area and optional cache
  • Galera for storing the database
  • OVN Northbound and Southbound database
  • RabbitMQ for storing the queues
  • Swift for storing object data when not using external physical nodes
  • Telemetry for storing metrics

Red Hat OpenStack Services on OpenShift with Red Hat Ceph Storage

Ceph is a well-known and widely used storage backend for OpenStack. It can serve block storage for Nova, Glance and Cinder, file storage for Manila, and object storage through the S3/Swift APIs.

The integration between OpenStack Services on OpenShift and Ceph is the same as in previous OpenStack versions—block is served by RADOS block devices (RBD), file by CephFS or network file system (NFS), and object by S3 or Swift.

The different OpenStack services are configured to connect to the Ceph cluster; what changes is the way you configure it at install time, as we now use native Kubernetes Custom Resource Definitions (CRDs) instead of TripleO templates as in previous versions.

The main design change is how to serve OpenShift volumes.

Using Ceph across both platforms

The first option is to use the same external Ceph cluster between OpenStack and OpenShift, consolidating the Ceph investment by sharing the storage resources.

Red Hat OpenStack Services on OpenShift design with shared Ceph cluster


 In the above diagram, OpenStack is consuming Ceph as usual, and OpenShift uses OpenShift Data Foundation (ODF) external mode to connect to the same cluster. ODF external deploys the Ceph CSI drivers that allow OpenShift to provision persistent volumes from a Ceph cluster.

OpenStack and OpenShift use different Ceph pools and keys, but architects should review their cluster’s capacity and performance to anticipate any potential impact. It’s also possible to isolate the storage I/O of both platforms by customizing the CRUSH map and allowing data to be stored on different object storage daemons (OSDs).

The design outlined above shares the same Ceph cluster between OpenShift and OpenStack but they can be different clusters based on the use case.

Third-party or local storage for OpenShift and Ceph for OpenStack

In some cases, you do not want to share OpenStack and OpenShift data on the same cluster. As mentioned before, it’s possible to use another Ceph cluster but the capacity needed for the control plane services may not be enough to justify it.

Another option is to leverage OpenShift’s workers' local disks. To do so, OpenShift includes an out-of-the-box logical volume manager (LVM) based CSI operator called LVM Storage (LVMS). LVMS allows dynamic local provisioning of the persistent volumes via LVM on the workers' local disks. This has the advantage of using local direct disk performance at a minimum cost.

On the other hand, with the data being local to a worker, pods relying on those volumes cannot be evacuated to other workers. This is a limitation to consider, especially if OpenStack control services are deployed on more than three workers.

It is also possible to rely on an existing third-party backend using a certified CSI driver which would remove the 1:1 pinning between the pod and the volume but can increase the cost. Using ODF internally as an OpenShift storage solution is also an option.

The OpenStack integration to Ceph remains the same.

Red Hat OpenStack Services on OpenShift design with Ceph cluster for OpenStack and alternative solution for OpenShift

OpenStack with Ceph hyper-converged

Deploying Ceph hyper-converged with OpenStack compute nodes is a popular solution to combine both compute and storage resources on the same hardware, reducing the cost and hardware footprint.

The integration with Ceph does not differ from an external Ceph besides the fact that the compute and storage services are collocated.

The OpenShift storage options are more limited, however, as it is not possible to use the hyper-converged Ceph cluster to back OpenShift persistent volumes.

The options are the same as those outlined in the previous section—OpenShift can rely on LVMS to leverage the local worker disks or use an existing third-party backend with a certified CSI driver.

Red Hat OpenStack Services on OpenShift design with Ceph HyperConverged for OpenStack

OpenStack with third-party storage solutions

For environments that are not using Ceph, the same principle applies. The OpenStack integration does not change, the control and compute services are configured to use an external shared storage backend through iSCSI, FC, NFS, NVMe/TCP or other vendor-specific protocols. Cinder and Manila drivers are still used to integrate the storage solution with OpenStack.

On the OpenShift side, the options are to either use LVMS to leverage the local worker disks or use an existing third-party backend with a certified CSI driver. This third-party backend can be the same as the one used for OpenStack or a different one.

Red Hat OpenStack Services on OpenShift design with third party storage

Wrap up

As Red Hat OpenStack moves to a more modern OpenShift-based deployment model, new storage systems need to be considered. Red Hat OpenStack Services on OpenShift offers a broad set of options for storing the OpenStack control services and the end user’s data. Whether you’re using Ceph or not, and whether you want shared storage or to rely on local disks, the different supported combinations will match a vast set of use cases and requirements.

For more details on Red Hat OpenStack Services on OpenShift storage integration, please refer to our planning guide.


How To Create Multi-Step Forms With Vanilla JavaScript And CSS


Multi-step forms are a good choice when your form is large and has many controls. No one wants to scroll through a super-long form on a mobile device. By grouping controls on a screen-by-screen basis, we can improve the experience of filling out long, complex forms.

But when was the last time you developed a multi-step form? Does that even sound fun to you? There’s so much to think about and so many moving pieces that need to be managed that I wouldn’t blame you for resorting to a form library or even some type of form widget that handles it all for you.

But doing it by hand can be a good exercise and a great way to polish the basics. I’ll show you how I built my first multi-step form, and I hope you’ll not only see how approachable it can be but maybe even spot areas to make my work even better.

We’ll walk through the structure together. We’ll build a job application form, which I think many of us can relate to these days. I’ll scaffold the baseline HTML, CSS, and JavaScript first, and then we’ll look at considerations for accessibility and validation.

I’ve created a GitHub repo for the final code if you want to refer to it along the way.

Our job application form has four sections, the last of which is a summary view, where we show the user all their answers before they submit them. To achieve this, we divide the HTML into four sections, each identified with an ID, and add navigation at the bottom of the page. I’ll give you that baseline HTML in the next section.

Navigating the user through the sections means we’ll also include a visual indicator of what step they are on and how many steps are left. This indicator can be a simple dynamic text that updates according to the active step or a fancier progress bar type of indicator. We’ll do the former to keep things simple and focused on the multi-step nature of the form.

We’ll focus more on the logic, but I will provide the code snippets and a link to the complete code at the end.

Let’s start by creating a folder to hold our pages. Then, create an index.html file and paste the following into it:

Looking at the code, you can see three sections and the navigation group. The sections contain form inputs and no native form validation. This is to give us better control of displaying the error messages because native form validation is only triggered when you click the submit button.

Next, create a styles.css file and paste this into it:

Open up the HTML file in the browser, and you should get something like the two-column layout in the following screenshot, complete with the current page indicator and navigation.

Now, create a script.js file in the same directory as the HTML and CSS files and paste the following JavaScript into it:

This script defines a function that shows and hides sections depending on the formSteps values, which correspond to the IDs of the form sections. It updates stepInfo with the currently active section of the form. This dynamic text acts as a progress indicator to the user.

It then adds logic that waits for the page to load and attaches click events to the navigation buttons to enable cycling through the different form sections. If you refresh your page, you will see that the multi-step form works as expected.
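The full script isn't reproduced in this excerpt, so here is a minimal sketch of the state and element lookups the snippets below rely on; the exact selectors are assumptions and may differ from the original markup:

const form = document.querySelector("form");
const formSteps = ["one", "two", "three"];   // IDs of the form sections
let currentStep = 0;

const stepInfo = document.querySelector(".stepInfo");   // progress indicator text
const navLeft = document.getElementById("navLeft");     // Previous button
const navRight = document.getElementById("navRight");   // Next button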

Let’s dive deeper into what the JavaScript code above is doing. In the updateStepVisibility() function, we first hide all the sections to have a clean slate:

formSteps.forEach((step) => {
  document.getElementById(step).style.display = "none";
});

Then, we show the currently active section:

document.getElementById(formSteps[currentStep]).style.display = "block";

Next, we update the text that indicates progress through the form:

stepInfo.textContent = `Step ${currentStep + 1} of ${formSteps.length}`;

Finally, we hide the Previous button if we are at the first step and hide the Next button if we are at the last section:

navLeft.style.display = currentStep === 0 ? "none" : "block";
navRight.style.display = currentStep === formSteps.length - 1 ? "none" : "block";

Let’s look at what happens when the page loads. We first hide the Previous button as the form loads on the first section:

document.addEventListener("DOMContentLoaded", () => {
  navLeft.style.display = "none";
  updateStepVisibility();
});

Then we grab the Next button and add a click event that conditionally increments the current step count and then calls the updateStepVisibility() function, which then updates the new section to be displayed:

navRight.addEventListener("click", () => {
  if (currentStep < formSteps.length - 1) {
    currentStep++;
    updateStepVisibility();
  }
});

Finally, we grab the Previous button and do the same thing but in reverse. Here, we are conditionally decrementing the step count and calling the updateStepVisibility():

navLeft.addEventListener("click", () => {
  if (currentStep > 0) {
    currentStep--;
    updateStepVisibility();
  }
});

Have you ever spent a good 10+ minutes filling out a form only to submit it and get vague errors telling you to correct this and that? I prefer it when a form tells me right away that something’s amiss so that I can correct it before I ever get to the Submit button. That’s what we’ll do in our form.

Our principle is to clearly indicate which controls have errors and to give meaningful error messages, clearing the errors as the user takes the necessary actions. Let’s add some validation to our form. First, let’s grab the necessary input elements and add this to the existing ones:

const nameInput = document.getElementById("name");
const idNumInput = document.getElementById("idNum");
const emailInput = document.getElementById("email");
const birthdateInput = document.getElementById("birthdate");
const documentInput = document.getElementById("document");
const departmentInput = document.getElementById("department");
const termsCheckbox = document.getElementById("terms");
const skillsInput = document.getElementById("skills");

Then, add a function to validate the steps:
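The original function isn't included in this excerpt; a minimal sketch consistent with the description that follows (the per-step checks and messages are assumptions) could look like this:

function validateStep(step) {
  let isValid = true;

  if (step === 0) {
    if (nameInput.value.trim() === "") {
      showError(nameInput, "Name is required");
      isValid = false;
    }
    if (idNumInput.value.trim() === "") {
      showError(idNumInput, "ID number is required");
      isValid = false;
    }
    // Simple email shape check; not a full RFC validation
    if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(emailInput.value.trim())) {
      showError(emailInput, "Enter a valid email address");
      isValid = false;
    }
    if (birthdateInput.value === "") {
      showError(birthdateInput, "Birthdate is required");
      isValid = false;
    }
  } else if (step === 1) {
    if (!documentInput.files.length) {
      showError(documentInput, "Please upload your CV");
      isValid = false;
    }
    if (departmentInput.value === "") {
      showError(departmentInput, "Please select a department");
      isValid = false;
    }
  } else if (step === 2) {
    if (!termsCheckbox.checked) {
      showError(termsCheckbox, "You must accept the terms");
      isValid = false;
    }
  }

  return isValid;
}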

Here, we check whether each required input has a value and whether the email input contains a valid email address. Then, we set the isValid boolean accordingly. We also call a showError() function, which we haven’t defined yet.

Paste this code above the validateStep() function:

function showError(input, message) {
  const formControl = input.parentElement;
  const errorSpan = formControl.querySelector(".error-message");
  input.classList.add("error");
  errorSpan.textContent = message;
}

Now, add the following styles to the stylesheet:

If you refresh the form, you will see that the buttons do not take you to the next section until the inputs are considered valid:

Finally, we want to add real-time error handling so that the errors go away when the user starts inputting the correct information. Add this function below the validateStep() function:
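The original listener-wiring function isn't shown in this excerpt; a minimal sketch of the idea (the list of tracked inputs is an assumption) might be:

const trackedInputs = [
  nameInput, idNumInput, emailInput, birthdateInput,
  documentInput, departmentInput, termsCheckbox, skillsInput,
];

trackedInputs.forEach((input) => {
  ["input", "change"].forEach((eventName) => {
    input.addEventListener(eventName, () => {
      // A stricter version could re-run the relevant check before clearing
      clearError(input);
    });
  });
});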

This function listens for input and change events on the fields and, once an input is no longer invalid, calls a function to clear its error. Paste the clearError() function below the showError() one:

function clearError(input) {
  const formControl = input.parentElement;
  const errorSpan = formControl.querySelector(".error-message");
  input.classList.remove("error");
  errorSpan.textContent = "";
}

And now the errors clear when the user types in the correct value:

The multi-step form now handles errors gracefully. If you do decide to keep the errors till the end of the form, then at the very least, jump the user back to the erroring form control and show some indication of how many errors they need to fix.

In a multi-step form, it is valuable to show the user a summary of all their answers at the end before they submit and to offer them an option to edit their answers if necessary. The person can’t see the previous steps without navigating backward, so showing a summary at the last step gives assurance and a chance to correct any mistakes.

Let’s add a fourth section to the markup to hold this summary view and move the submit button within it. Paste this just below the third section in index.html:

Then update the formSteps array in your JavaScript to read:

const formSteps = ["one", "two", "three", "four"];

Finally, add the following classes to styles.css:

.summary-section {
  display: flex;
  align-items: center;
  gap: 10px;
}

.summary-section p:first-child {
  width: 30%;
  flex-shrink: 0;
  border-right: 1px solid var(--secondary-color);
}

.summary-section p:nth-child(2) {
  width: 45%;
  flex-shrink: 0;
  padding-left: 10px;
}

.edit-btn {
  width: 25%;
  margin-left: auto;
  background-color: transparent;
  color: var(--primary-color);
  border: .7px solid var(--primary-color);
  border-radius: 5px;
  padding: 5px;
}

.edit-btn:hover {
  border: 2px solid var(--primary-color);
  font-weight: bolder;
  background-color: transparent;
}

Now, add the following to the top of the script.js file where the other consts are:

const nameVal = document.getElementById("name-val");
const idVal = document.getElementById("id-val");
const emailVal = document.getElementById("email-val");
const bdVal = document.getElementById("bd-val");
const cvVal = document.getElementById("cv-val");
const deptVal = document.getElementById("dept-val");
const skillsVal = document.getElementById("skills-val");
const editButtons = {
  "name-edit": 0,
  "id-edit": 0,
  "email-edit": 0,
  "bd-edit": 0,
  "cv-edit": 1,
  "dept-edit": 1,
  "skills-edit": 2
};

Then add this function in script.js:

function updateSummaryValues() {
  nameVal.textContent = nameInput.value;
  idVal.textContent = idNumInput.value;
  emailVal.textContent = emailInput.value;
  bdVal.textContent = birthdateInput.value;

  const fileName = documentInput.files[0]?.name;
  if (fileName) {
    const extension = fileName.split(".").pop();
    const baseName = fileName.split(".")[0];
    const truncatedName = baseName.length > 10 ? baseName.substring(0, 10) + "..." : baseName;
    cvVal.textContent = `${truncatedName}.${extension}`;
  } else {
    cvVal.textContent = "No file selected";
  }

  deptVal.textContent = departmentInput.value;
  skillsVal.textContent = skillsInput.value || "No skills submitted";
}

This dynamically inserts the input values into the summary section of the form, truncates the file names, and offers a fallback text for the input that was not required.

Then update the updateStepVisibility() function to call the new function:

function updateStepVisibility() {
  formSteps.forEach((step) => {
    document.getElementById(step).style.display = "none";
  });

  document.getElementById(formSteps[currentStep]).style.display = "block";
  stepInfo.textContent = `Step ${currentStep + 1} of ${formSteps.length}`;
  if (currentStep === 3) {
    updateSummaryValues();
  }

  navLeft.style.display = currentStep === 0 ? "none" : "block";
  navRight.style.display = currentStep === formSteps.length - 1 ? "none" : "block";
}

Finally, add this to the DOMContentLoaded event listener:

Object.keys(editButtons).forEach((buttonId) => {
  const button = document.getElementById(buttonId);
  button.addEventListener("click", (e) => {
    currentStep = editButtons[buttonId];
    updateStepVisibility();
  });
});

Running the form, you should see that the summary section shows all the inputted values and allows the user to edit any before submitting the information:

And now, we can submit our form:

form.addEventListener("submit", (e) => {
  e.preventDefault();

  if (validateStep(2)) {
    alert("Form submitted successfully!");
    form.reset();
    currentStep = 0;
    updateStepVisibility();
  }
});

Our multi-step form now allows the user to edit and see all the information they provide before submitting it.

Making multi-step forms accessible starts with the basics: using semantic HTML. This is half the battle. It is closely followed by using appropriate form labels.

Other ways to make forms more accessible include giving enough room to elements that must be clicked on small screens and giving meaningful descriptions to the form navigation and progress indicators.

Offering feedback to the user is an important part of it; it’s better not to auto-dismiss user feedback after a certain amount of time but to allow the user to dismiss it themselves. Paying attention to contrast and font choice is important, too, as they both affect how readable your form is.

Let’s make the following adjustments to the markup for more technical accessibility (a small markup sketch follows the list):

  1. Add aria-required="true" to all inputs except the skills one. This lets screen readers know the fields are required without relying on native validation.
  2. Add role="alert" to the error spans. This helps screen readers know to give it importance when the input is in an error state.
  3. Add role="status" aria-live="polite" to the .stepInfo. This will help screen readers understand that the step info keeps tabs on a state, and the aria-live being set to polite indicates that should the value change, it does not need to immediately announce it.

In the script file, replace the showError() and clearError() functions with the following:

function showError(input, message) {
  const formControl = input.parentElement;
  const errorSpan = formControl.querySelector(".error-message");
  input.classList.add("error");
  input.setAttribute("aria-invalid", "true");
  input.setAttribute("aria-describedby", errorSpan.id);
  errorSpan.textContent = message;
}

function clearError(input) {
  const formControl = input.parentElement;
  const errorSpan = formControl.querySelector(".error-message");
  input.classList.remove("error");
  input.removeAttribute("aria-invalid");
  input.removeAttribute("aria-describedby");
  errorSpan.textContent = "";
}

Here, we programmatically add and remove attributes that explicitly tie the input with its error span and show that it is in an invalid state.

Finally, let’s add focus on the first input of every section; add the following code to the end of the updateStepVisibility() function:

const currentStepElement = document.getElementById(formSteps[currentStep]);
const firstInput = currentStepElement.querySelector(
  "input, select, textarea"
);

if (firstInput) {
  firstInput.focus();
}

And with that, the multi-step form is much more accessible.

There we go, a four-part multi-step form for a job application! As I said at the top of this article, there’s a lot to juggle — so much so that I wouldn’t fault you for looking for an out-of-the-box solution.

But if you have to hand-roll a multi-step form, hopefully now you see it’s not a death sentence. There’s a happy path that gets you there, complete with navigation and validation, without turning away from good, accessible practices.

And this is just how I approached it! Again, I took this on as a personal challenge to see how far I could get, and I’m pretty happy with it. But I’d love to know if you see additional opportunities to make this even more mindful of the user experience and considerate of accessibility.

Here are some relevant links I referred to when writing this article:


seddonym/import-linter: Import Linter allows you to define and enforce rules for the internal and external imports within your Python project.


Publishing a simple client-side JavaScript package to npm with GitHub Actions


Here's what I learned about publishing a single file JavaScript package to NPM for my Prompts.js project.

The code is in simonw/prompts-js on GitHub. The NPM package is prompts-js.

A simple single file client-side package

For this project, I wanted to create an old-fashioned JavaScript file that you could include in a web page using a <script> tag. No TypeScript, no React JSX, no additional dependencies, no build step.

I also wanted to ship it to NPM, mainly so it would be magically available from various CDNs.

I think I've boiled that down to about as simple as I can get. Here's the package.json file:

{
  "name": "prompts-js",
  "version": "0.0.4",
  "description": "async alternatives to browser alert() and prompt() and confirm()",
  "main": "index.js",
  "homepage": "https://github.com/simonw/prompts-js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Simon Willison",
  "license": "Apache-2.0",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/simonw/prompts-js.git"
  },
  "keywords": [
    "alert",
    "prompt",
    "confirm",
    "async",
    "promise",
    "dialog"
  ],
  "files": [
    "index.js",
    "README.md",
    "LICENSE"
  ]
}

That "scripts.test" block probably isn't necessary. The keywords are used when you deploy to NPM, and the files block tells NPM which files to include in the package.

The "repository" block is used by NPM's provenance statements. Don't worry too much about these - they're only needed if you use the npm publish --provenance option later on.

Really the three most important keys here are "name", which needs to be a unique name on NPM, "version" and that "main" key. I set "main" to index.js.

All that's needed now is that index.js file - and optionally the README.md and LICENSE files if we want to include them in the package. The README.md ends up displayed on the NPM listing page so it's worth including.

Here's my index.js file. It starts and ends like this (an IIFE):

const Prompts = (function () {
  // ...
  return { alert, confirm, prompt };
})();
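Since the IIFE returns alert, confirm and prompt, usage presumably mirrors the native dialogs it replaces, just asynchronously; a hypothetical example (not taken from the project's README):

async function demo() {
  // Each call returns a Promise instead of blocking the page
  await Prompts.alert("Settings saved.");

  const ok = await Prompts.confirm("Delete this item?");
  console.log(ok ? "confirmed" : "cancelled");

  const name = await Prompts.prompt("What is your name?");
  console.log(name);
}

demo();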

Publishing to NPM

With these pieces in place, running npm publish in the root of the project will publish the package to NPM - after first asking you to sign into your NPM account.

Automating this with GitHub Actions

I use GitHub Actions that trigger on any release to publish all of my Python projects to PyPI. I wanted to do the same for this JavaScript project.

I found this example in the GitHub documentation which gave me most of what I needed. This is in .github/workflows/publish.yml:

name: Publish Package to npmjs
on:
  release:
    types: [published]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20.x'
          registry-url: 'https://registry.npmjs.org'
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

There's that --provenance option which only works if you have the repository block set up in your package.json.

This needs a secret called NPM_TOKEN to be set up in the GitHub repository settings.

It took me a few tries to get this right. It needs to be a token created on the NPM website using the Access Tokens menu item, then Generate New Token -> Classic Token. As far as I can tell the new "Granular Access Token" format doesn't work for this as it won't allow you to create a token that never expires, and I never want to have to remember to update the secret in the future.

An "Automation" token should do the trick here - it bypasses 2-factor authentication when publishing.

Set that in GitHub Actions as a secret called NPM_TOKEN and now you can publish a new version of your package to NPM by doing the following:

  1. Update the version number in package.json
  2. Create a new release on GitHub with a tag that matches the version number

Simple trick to save environment and money when using GitHub Actions


We recently onboarded Nikita Sivukhin as a new member of our Engineering team at Turso. He immediately started making meaningful contributions to our Native Vector Search, but something else prompted me to write this article. In addition to working on his main task, Nikita started to poke around our codebase and fix anything he found worth tackling. This is a great proactive approach which I highly recommend to any software engineer. One thing Nikita improved was our GitHub Actions setup, to avoid running jobs that are no longer needed. This is great because GitHub Actions not only consume electricity when they run but also either cost money when used for private repositories or count against a usage quota for open source projects.

#What's the problem

We use GitHub Actions for our CI/CD at Turso, both on open source projects and on private ones. Among other things, we run GitHub Actions on our Pull Requests. Some of those actions are pretty heavy and can take a considerable amount of time. Rust compilation has its share, but we also run all sorts of tests, from unit tests to end-to-end tests. It isn't uncommon for a Pull Request to be updated before CI/CD has finished for the previous version. Unfortunately, GitHub does not cancel GitHub Actions for a stale version of the code, and those tasks keep running until they either fail or fully finish. This is a problem because those old CI/CD runs consume resources like electricity and GitHub Actions runners even though no one is interested in the outcome of the run any more.

#Solution

This problem can be easily solved in a universal way. If you're running your GitHub Actions on the pull_request: target, then you just need to add the following snippet to the definition of your GitHub workflow:

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

And voilà, GitHub will start cancelling all old GitHub Actions runs that become stale after a new version of the Pull Request is pushed. You can see the solution in a wider context in Nikita's Pull Request that added this to the LibSQL GitHub repository.

#Effects

As a consequence of this change, you will start seeing a new result type on your GitHub Actions summary page. There will be not only a green circle with a tick and a red circle with an X, but also a grey octagon with an exclamation point, which means a task was cancelled. Below is a screenshot from the GitHub Actions summary page of the LibSQL repository.

During the first week after Nikita's Pull Request had been merged, 56 tasks were cancelled in LibSQL repository alone.

#Conclusion

I hope this short article was able to convince you that if you're using GitHub Actions for your CI/CD, you can easily become more environmentally friendly and possibly save some money on your GitHub bills.
