
Mario Hernandez: Building a modern Drupal theme with Storybook

Posted on April 14, 2024 by Michael G

Building a custom Drupal theme nowadays is a more complex process than it used to be. Most themes require some kind of build tool such as Gulp, Grunt, Webpack, or others to automate many of the repetitive tasks we perform when working on the front end: compiling and minifying code, compressing images, linting code, and many more. As Atomic Web Design became popular, things got more complicated, because if you are building components you also need a styleguide or design system to showcase and maintain those components. One of those design systems for me has been Patternlab. I started using Patternlab in all my Drupal projects almost ten years ago with great success. Patternlab has also been the design system of choice at my place of work, but one of my immediate tasks was to migrate to a different design system. We have a small team, but we were very excited about the challenge of finding and adopting a more modern and robust design system for our large multi-site Drupal environment.

Disclaimer: Due to technical restrictions and project requirements, we were not able to use Single Directory Components (SDC) or the Storybook module. Our front-end environment was custom built from the ground up.

Enter Storybook

After looking at various options for a design system, Storybook seemed to be the right choice for us for a couple of reasons: one, it has been around for about 10 years and has matured significantly in that time, and two, it has become a very popular option in the Drupal ecosystem. In some ways, Storybook follows the same model as Drupal: it has a pretty active community and a very healthy ecosystem of plugins to extend its core functionality.

Storybook looks very promising as a design system for Drupal projects, and with the recent release of Single Directory Components (SDC) and the new Storybook module, we think things can only get better for Drupal front-end development. Unfortunately for us, technical limitations in combination with our specific requirements prevented us from using SDC or the Storybook module. Instead, we built our environment from scratch with a stand-alone integration of Storybook 8.

Our process and requirements

In choosing Storybook, we went through a rigorous research and testing process to ensure it would not only solve our immediate problems with our current environment, but would also be around as a long-term solution. As part of this process, we also tested several available options like Emulsify and Gesso, which would be great choices for anyone looking for a ready-to-go system out of the box. Some of our requirements included:

1. No components refactoring

The first and non-negotiable requirement was to be able to migrate components from Patternlab to a new design system with as little refactoring as possible. We have a decent number of components that were built within the last year, and the last thing we wanted was to rebuild them just because we were switching design systems.

2. A new Front-end build workflow

I personally have been faithful to Gulp as a front-end build tool for as long as I can remember, because it did everything I needed in a very efficient manner. The Drupal project we maintain also used Gulp, but as part of this migration we wanted to see what other options were out there that could improve our workflow. The obvious choice seemed to be Webpack, but as we looked closer we learned about ViteJS, “The Next Generation Frontend Tooling”. Vite delivers on its promise of being “blazing fast”, and its ecosystem is great and growing, so we went with it.

3. No more Sass in favor of PostCSS

CSS has drastically improved in recent years. It is now possible, with plain CSS, to do many of the things you could previously only do with Sass or a similar CSS preprocessor. Eliminating Sass from our workflow meant we would also be able to get rid of many other node dependencies related to Sass. The goal for this project was to use plain CSS in combination with PostCSS, and one bonus of using Vite is that it offers PostCSS processing out of the box without additional plugins or dependencies. Of course, if you want to do more advanced PostCSS processing you will probably need some external dependencies.
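
For example, Vite automatically picks up a PostCSS config file at the project root. Here is a minimal sketch of what that could look like; postcss-preset-env is an assumed example plugin, not something this setup requires:

// postcss.config.cjs — minimal sketch; install the assumed plugin with:
// npm i -D postcss-preset-env
module.exports = {
  plugins: {
    // Enable modern CSS features (nesting, custom media, etc.) with fallbacks.
    'postcss-preset-env': { stage: 2 },
  },
};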

Building a new Drupal theme with Storybook

Let’s go over the steps to build the base of your new Drupal theme with ViteJS and Storybook. This will be at a high level, calling out only the most important and Drupal-related parts. This process will create a brand new theme. If you already have a theme you would like to use, adjust the instructions accordingly.

1. Setup Storybook with ViteJS

ViteJS

  • In your Drupal project, navigate to the theme’s directory (i.e. /web/themes/custom/)
  • Run the following command:
npm create vite@latest my_theme
# replace my_theme with a name of your choice.
  • When prompted, select the framework of your choice; for us this is React.
  • When prompted, select the variant for your project; for us this is JavaScript.

After the setup finishes you will have a basic Vite project running.

Storybook

  • Be sure your system is running NodeJS version 18 or higher
  • Inside the newly created theme, run this command:
npx storybook@latest init --type react
  • After installation completes, you will have a new Storybook instance running
  • If Storybook didn’t start on its own, start it by running:
npm run storybook

TwigJS

Twig templates are server-side templates that Drupal normally renders to HTML with TwigPHP, but Storybook is a JavaScript tool. TwigJS is the JavaScript equivalent of TwigPHP and is what allows Storybook to understand Twig. Let’s install all the dependencies needed for Storybook to work with Twig.

  • If Storybook is still running, press Ctrl + C to stop it
  • Then run the following command:
npm i -D vite-plugin-twig-drupal html-react-parser twig-drupal-filters @modyfi/vite-plugin-yaml
  • vite-plugin-twig-drupal: Since we are using Vite, this is the Vite plugin that handles transforming Twig files into a JavaScript function that can be used with Storybook. This plugin includes the following:
    • Twig or TwigJS: The JavaScript implementation of the Twig PHP templating language. This is what allows Storybook to understand Twig.
      Note: TwigJS may not always be in sync with the version of Twig PHP in Drupal, so you may run into issues with certain Twig functions or filters. However, the other extensions we are adding may help with these incompatibility issues, and anything still missing can be shimmed (see the sketch after the Storybook configuration below).
    • drupal attribute: Adds the ability to work with Drupal attributes.
  • twig-drupal-filters: Adds Drupal’s Twig filters and functions to TwigJS.
  • html-react-parser: This extension is key for Storybook to parse HTML code into React elements.
  • @modyfi/vite-plugin-yaml: Transforms a YAML file into a JavaScript object. This is useful for passing the component’s data to React as args.

ViteJS configuration

Update your vite.config.js so it makes use of the new extensions we just installed and configures the namespaces for our components.

import { defineConfig } from "vite"
import yml from '@modyfi/vite-plugin-yaml';
import twig from 'vite-plugin-twig-drupal';
import { join } from "node:path"
export default defineConfig({
  plugins: [
    twig({
      namespaces: {
        components: join(__dirname, "./src/components"),
        // Other namespaces may be added.
      },
    }),
    // Allows Storybook to read data from YAML files.
    yml(),
  ],
})

Storybook configuration

Out of the box, Storybook comes with main.js and preview.js inside the .storybook directory. These two files are where a lot of Storybook’s configuration is done. We are going to define the location of our components, the same location we configured in vite.config.js above (we’ll create this directory shortly). We are also going to add a quick bit of configuration inside preview.js for handling Drupal filters.

  • Inside the .storybook/main.js file, update the stories array as follows:
stories: [
  "../src/components/**/*.mdx",
  "../src/components/**/*.stories.@(js|jsx|mjs|ts|tsx)",
],
  • Inside .storybook/preview.js, update it as follows:
import Twig from 'twig';
import drupalFilters from 'twig-drupal-filters';

function setupFilters(twig) {
  twig.cache();
  drupalFilters(twig);
  return twig;
}

setupFilters(Twig);

/** @type { import('@storybook/react').Preview } */
const preview = {
  parameters: {
    controls: {
      matchers: {
        color: /(background|color)$/i,
        date: /Date$/i,
      },
    },
  },
};

export default preview;
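
If a template uses a Drupal-only Twig function that TwigJS and twig-drupal-filters do not provide, one option is to shim it yourself in setupFilters. Below is a hedged sketch; treating attach_library as a no-op is our assumption for the Storybook context, not something the plugins require:

// .storybook/preview.js — a variation of setupFilters() with a shim added.
function setupFilters(twig) {
  twig.cache();
  drupalFilters(twig);
  // attach_library() only has meaning inside Drupal, so render it as nothing here.
  twig.extendFunction('attach_library', () => '');
  return twig;
}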

Creating the components directory

  • If Storybook is still running, press Ctrl + C to stop it
  • Inside the src directory, create the components directory. Alternatively, you could rename the existing stories directory to components.

Creating your first component

With the current system in place we can start building components. We’ll start with a very simple component to try things out first.

  • Inside src/components, create a new directory called title
  • Inside the title directory, create the following files: title.yml and title.twig

Writing the code

  • Inside title.yml, add the following:
---
level: 2
modifier: 'title'
text: 'Welcome to your new Drupal theme with Storybook!'
url: 'https://mariohernandez.io'
  • Inside title.twig, add the following:
<h{{ level|default(2) }}{% if modifier %} class="{{ modifier }}"{% endif %}>
  {% if url %}
    <a href="{{ url }}">{{ text }}</a>
  {% else %}
    <span>{{ text }}</span>
  {% endif %}
</h{{ level|default(2) }}>

We have a simple title component that will print a title of anything you want. The level key allows us to change the heading level of the title (i.e. h1, h2, h3, etc.), the modifier key allows us to pass a modifier class to the component, and the url key is helpful when our title needs to be a link to another page or component.
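
Because of the components namespace we configured in vite.config.js, other Twig templates can reuse this component. Here is a hedged sketch; page-header.twig is a hypothetical parent component, and we are assuming the plugin resolves Drupal-style @components paths:

{# page-header.twig — hypothetical component that reuses title #}
<header class="page-header">
  {% include "@components/title/title.twig" with {
    level: 1,
    modifier: 'page-header__title',
    text: 'About us',
  } only %}
</header>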

Currently the title component is not available in Storybook. Storybook uses a special file to display each component as a story; the file is named component-name.stories.jsx.

  • Inside the title directory, create a file called title.stories.jsx
  • Inside the stories file, add the following:
/**
 * First we import the `html-react-parser` extension to be able to
 * parse HTML into React.
 */
import parse from 'html-react-parser';

/**
 * Next we import the component's markup and logic (twig), data schema (yml),
 * as well as any styles or JS the component may use.
 */
import title from './title.twig';
import data from './title.yml';

/**
 * Next we define a default configuration for the component to use.
 * These settings will be inherited by all stories of the component,
 * should the component have multiple variations.
 * `component` is an arbitrary name assigned to the default configuration.
 * `title` determines the location and name of the story in Storybook’s sidebar.
 * `render` uses the parser extension to render the component's HTML to React.
 * `args` uses the variables defined in title.yml as React args.
 */
const component = {
  title: 'Components/Title',
  render: (args) => parse(title(args)),
  args: { ...data },
};

/**
 * Export the Title and render it in Storybook as a Story.
 * The `name` key allows you to assign a name to each story of the component.
 * For example: `Title`, `Title dark`, `Title light`, etc.
 */
export const TitleElement = {
  name: 'Title',
};

/**
 * Finally export the default object, `component`. Storybook/React requires this step.
 */
export default component;
  • If Storybook is running, you should see the title story.
  • Otherwise start Storybook by running:
npm run storybook

With Storybook running, the title component should look like the image below:

[Screenshot: the Title component rendered in Storybook]
The controls highlighted at the bottom of the title allow you to change the values of each of the fields for the title.
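
Storybook generates these controls automatically from the args. If you want to constrain them, the standard argTypes key can be added to the same default configuration in title.stories.jsx; the select options below are our own choice, shown here only as a sketch:

const component = {
  title: 'Components/Title',
  render: (args) => parse(title(args)),
  args: { ...data },
  // Constrain the control for `level` to valid heading levels (our choice).
  argTypes: {
    level: {
      control: { type: 'select' },
      options: [1, 2, 3, 4, 5, 6],
    },
  },
};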

I wanted to start with the simplest of components, the title, to show how Storybook, with help from the extensions we installed, understands Twig. The good news is that the same approach we took with the title component works on even more complex components. Even the React code we wrote does not change much on large components.

In the next blog post, we will build more components that nest smaller components, and we will also add Drupal related parts and configuration to our theme so we can begin using the theme in a Drupal site. Finally, we will integrate the components we built in Storybook with Drupal so our content can be rendered using the component we’re building. Stay tuned. For now, if you want to grab a copy of all the code in this post, you can do so below.

Download the code

Resources

  • Storybook docs
  • How to write stories
  • Single Directory Components
  • Storybook Drupal module

In closing

Getting to this point was a team effort and I’d like to thank Chaz Chumley, a Senior Software Engineer, who did a lot of the configuration discussed in this post. In addition, I am thankful to the Emulsify and Gesso teams for letting us pick their brains during our research. Their help was critical in this process.

I hope this was helpful, and if there is anything I can help you with on your journey toward a Storybook-friendly Drupal theme, feel free to reach out.

Simon Josefsson: Reproducible and minimal source-only tarballs

Posted on April 14, 2024 by Michael G

With the release of Libntlm version 1.8, the release tarball can be reproduced on several distributions. We also publish a signed minimal source-only tarball, produced by git-archive, which is the same format used by Savannah, Codeberg, GitLab, GitHub and others. Reproducibility of both tarballs is tested continuously for regressions on GitLab through a CI/CD pipeline. If that wasn’t enough to excite you, the Debian packages of Libntlm are now built from the reproducible minimal source-only tarball. The resulting binaries are hopefully reproducible on several architectures.

What does that even mean? Why should you care? How you can do the same for your project? What are the open issues? Read on, dear reader…

This article describes my practical experiments with reproducible release artifacts, following up on my earlier thoughts that led to a discussion on Fosstodon and a patch by Janneke Nieuwenhuizen to make Guix tarballs reproducible, which inspired me to do some practical work.

Let’s look at how a maintainer releases some software, and how a user can reproduce the released artifacts from the source code. Libntlm provides a shared library written in C and uses GNU Make, GNU Autoconf, GNU Automake, GNU Libtool and gnulib for build management, but these ideas should apply to most projects and build systems. The following illustrates the steps a maintainer would take to prepare a release:

git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make distcheck
gpg -b libntlm-1.8.tar.gz

The generated files libntlm-1.8.tar.gz and libntlm-1.8.tar.gz.sig are published, and users download and use them. This is how the GNU project has been doing releases since the late 1980s. That is a testament to how successful this pattern has been! These tarballs contain source code and some generated files: typically shell scripts generated by autoconf, makefile templates generated by automake, and documentation in formats like Info, HTML, or PDF. Rarely do they contain binary object code, but historically that has happened.

The XZUtils incident illustrates that tarballs containing files that are not in the git archive offer an opportunity to disguise malicious backdoors. I blogged earlier about how to mitigate this risk by using signed minimal source-only tarballs.

The risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions such as Trisquel, Guix, Debian/Ubuntu or Fedora ship generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts were not rebuilt through a typical autoreconf -fi && ./configure && make install sequence, and never wrote the code to rebuild everything. This can also happen if the build rules are written but are buggy, shipping the old artifact. When a security problem is found, this can lead to time-consuming situations, as it may be that patching the relevant source code and rebuilding the package is not sufficient: the vulnerable generated object from the tarball would be shipped in the binary package instead of a rebuilt artifact. For architecture-specific binaries this rarely happens, since object code is usually not included in tarballs — although for 10+ years I shipped the binary Java JAR file in the GNU Libidn release tarball, until I stopped shipping it. For interpreted languages, and especially for generated content such as HTML, PDF, and shell scripts, this happens more often than you would like.

Publishing minimal source-only tarballs enables easier auditing of a project’s code, avoiding the need to read through all generated files looking for malicious content. I have taken care to generate the source-only minimal tarball using git-archive. This is the same format that GitLab, GitHub and others offer for the automated download links on git tags. The minimal source-only tarballs can thus serve as a way to audit GitLab and GitHub download material! Consider if or when hosting sites like GitLab or GitHub have a security incident that causes generated tarballs to include a backdoor that is not present in the git repository. If people rely on the tag download artifact without verifying the maintainer’s PGP signature using GnuPG, this can lead to backdoor scenarios similar to XZUtils, but originating with the hosting provider instead of the release manager. This is even more concerning, since such an attack can be mounted against selected IP addresses that you want to target rather than against everyone, making it harder to discover.

With all that discussion and rationale out of the way, let’s return to the release process. I have added another step here:

make srcdist
gpg -b libntlm-1.8-src.tar.gz

Now the release is ready. I publish these four files in Libntlm’s Savannah Download area, but they could be uploaded to a GitLab/GitHub release area as well. These are the SHA256 checksums I got after building the tarballs on my Trisquel 11 aramo laptop:

91de864224913b9493c7a6cec2890e6eded3610d34c3d983132823de348ec2ca  libntlm-1.8-src.tar.gz
ce6569a47a21173ba69c990965f73eb82d9a093eb871f935ab64ee13df47fda1  libntlm-1.8.tar.gz

So how can you reproduce my artifacts? Here is how to reproduce them in a Ubuntu 22.04 container:

podman run -it --rm ubuntu:22.04
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make dist srcdist
sha256sum libntlm-*.tar.gz

You should see the exact same SHA256 checksum values. Hooray!

This works because Trisquel 11 and Ubuntu 22.04 use the same versions of git, autoconf, automake, and libtool. These tools do not guarantee the same output content across versions, similar to how GNU GCC does not generate the same binary output across versions. So there is still some delicate version pairing needed.

Ideally, the artifacts should be reproducible from the release artifacts themselves, and not only directly from git. It is possible to reproduce the full tarball in an AlmaLinux 8 container – replace almalinux:8 with rockylinux:8 if you prefer RockyLinux:

podman run -it --rm almalinux:8
dnf update -y
dnf install -y make wget gcc
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8.tar.gz
tar xfa libntlm-1.8.tar.gz
cd libntlm-1.8
./configure
make dist
sha256sum libntlm-1.8.tar.gz

The source-only minimal tarball can be regenerated on Debian 11:

podman run -it --rm debian:11
apt-get update
apt-get install -y --no-install-recommends make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
make -f cfg.mk srcdist
sha256sum libntlm-1.8-src.tar.gz 

As the magnum opus or chef-d’œuvre, let’s recreate the full tarball directly from the minimal source-only tarball on Trisquel 11 – replace docker.io/kpengboy/trisquel:11.0 with ubuntu:22.04 if you prefer.

podman run -it --rm docker.io/kpengboy/trisquel:11.0
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make wget git ca-certificates
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8-src.tar.gz
tar xfa libntlm-1.8-src.tar.gz
cd libntlm-v1.8
./bootstrap
./configure
make dist
sha256sum libntlm-1.8.tar.gz

Yay! You should now have great confidence that the release artifacts correspond to what’s in version control and to what the maintainer intended to release. Your remaining job is to audit the source code for vulnerabilities, including the source code of the dependencies used in the build. You no longer have to worry about auditing the release artifacts.

I find it somewhat amusing that the build infrastructure for Libntlm is now in a significantly better place than the code itself. Libntlm is written in old C style with plenty of string manipulation and uses broken cryptographic algorithms such as MD4 and single-DES. Remember folks: solving supply chain security issues has no bearing on what kind of code you eventually run. A clean gun can still shoot you in the foot.

Side note on naming: GitLab exports tarballs with pathnames libntlm-v1.8/ (i.e., PROJECT-TAG/) and I’ve adopted the same pathnames, which means my libntlm-1.8-src.tar.gz tarballs are bit-by-bit identical to GitLab’s exports and you can verify this with tools like diffoscope. GitLab names the tarball libntlm-v1.8.tar.gz (i.e., PROJECT-TAG.ARCHIVE), which I find too similar to the libntlm-1.8.tar.gz that we also publish. GitHub uses the same git archive style, but unfortunately it has logic that removes the ‘v’ in the pathname, so you will get a tarball with pathname libntlm-1.8/ instead of the libntlm-v1.8/ that GitLab and I use. The content of the tarball is bit-by-bit identical, but the pathname and archive name differ. Codeberg (running Forgejo) uses another approach: the tarball is called libntlm-v1.8.tar.gz (after the tag) just like GitLab’s, but the pathname inside the archive is libntlm/; otherwise the produced archive is bit-by-bit identical, including timestamps. Savannah’s CGIT interface uses the archive name libntlm-1.8.tar.gz with pathname libntlm-1.8/, but otherwise the file content is identical. Savannah’s GitWeb interface provides snapshot links that are named after the git commit (e.g., libntlm-a812c2ca.tar.gz with libntlm-a812c2ca/) and I cannot find any tag-based download links at all. Overall, we are so close to getting the SHA256 checksums to match, but fail on the pathname within the archive. I’ve chosen to be compatible with GitLab regarding the content of tarballs but not regarding archive naming. From a simplicity point of view, it would be nice if everyone used PROJECT-TAG.ARCHIVE for the archive filename and PROJECT-TAG/ for the pathname within the archive. This aspect will probably need more discussion.

Side note on git archive output: It seems different versions of git archive produce different results for the same repository. The version of git in Debian 11, Trisquel 11 and Ubuntu 22.04 behaves the same. The version of git in Debian 12, AlmaLinux/RockyLinux 8/9, Alpine, Arch Linux, macOS Homebrew, and the upcoming Ubuntu 24.04 behaves in another way. Hopefully this will not change often, but a change would invalidate reproducibility of these tarballs in the future, forcing you to use an old git release to reproduce the source-only tarball. Alas, GitLab and most other sites appear to be using modern git, so the download tarballs from them will not match my tarballs – even though the content does.

Side note on ChangeLog: ChangeLog files were traditionally manually curated files with the version history of a package. In recent years, several projects moved to generating them dynamically from git history (using tools like git2cl or gitlog-to-changelog). This has consequences for the reproducibility of tarballs: you need to have the entire git history available! The gitlog-to-changelog tool also produces different output depending on the time zone of the person using it, which is arguably a simple bug that can be fixed. However, this entire approach is incompatible with rebuilding the full tarball from the minimal source-only tarball. It seems Libntlm’s ChangeLog file died on the surgery table here.

So how would a distribution build these minimal source-only tarballs? I happen to help with the libntlm package in Debian. It has historically used the generated tarballs as the source code to build from, which means that code coming from gnulib is vendored in the tarball. When a security problem is discovered in gnulib code, the security team needs to patch all packages that include that vendored code and rebuild them, instead of merely patching the gnulib package and rebuilding all packages that rely on that particular code. To change this, the Debian libntlm package needs to Build-Depends on Debian’s gnulib package. But there was one problem: like most projects that use gnulib, Libntlm depends on a particular git commit of gnulib, and Debian only ships one commit. There is no coordination about which commit to use. I have adopted gnulib in Debian, and added a git bundle to the *_all.deb binary package so that projects relying on gnulib can pick whatever commit they need. This allows a no-network GNULIB_URL and GNULIB_REVISION approach when running Libntlm’s ./bootstrap with the Debian gnulib package installed. Otherwise libntlm would pick up whatever latest version of gnulib Debian happened to have in the gnulib package, which is not what the Libntlm maintainer intended to be used, and can lead to all sorts of version mismatches (and consequently security problems) over time. Libntlm in Debian is developed and tested on Salsa, and there is continuous integration testing of it as well, thanks to the Salsa CI team.

Side note on git bundles: unfortunately there appears to be no reproducible way to export a git repository into one or more files. So one unfortunate consequence of all this work is that the gnulib *.orig.tar.gz tarball in Debian is no longer reproducible. I have tried to get git bundles to be reproducible but never got it to work — see my notes in gnulib’s debian/README.source on this aspect. Of course, source tarball reproducibility has nothing to do with binary reproducibility of gnulib in Debian itself, fortunately.

One open question is how to deal with the increased build dependencies triggered by this approach. Some people are surprised by this, but I don’t see how to get around it: if you depend on source code for tools in another package to build your package, it is a bad idea to hide that dependency. We’ve done so for a long time through vendored code in non-minimal tarballs. Libntlm isn’t the most critical project from a bootstrapping perspective, so adding git and gnulib as Build-Depends to it will probably be fine. However, consider if this pattern were used for other packages that use gnulib, such as coreutils, gzip, tar, and bison (all of which use gnulib): they would all Build-Depends on git and gnulib. Cross-building those packages for a new architecture would therefore require git on that architecture first, which gets circular quickly. The dependency on gnulib is real, so I don’t see that going away, and gnulib is an Architecture:all package. However, the dependency on git is merely a consequence of how the Debian gnulib package chose to make all gnulib git commits available to projects: through a git bundle. There are other ways to do this that don’t require the git tool to extract the necessary files, but none that I found practical — ideas welcome!

Finally, some brief notes on how this was implemented. Enabling bootstrappable source-only minimal tarballs via gnulib’s ./bootstrap is achieved by using the GNULIB_REVISION mechanism, locking down the gnulib commit used. I have always disliked git submodules because they add extra steps and complicate interaction with CI/CD. The reason I gave up git submodules now is that the particular commit to use is not recorded in the git archive output when git submodules are used, so the particular gnulib commit has to be mentioned explicitly in some source file that goes into the git archive tarball. Colin Watson added the GNULIB_REVISION approach to ./bootstrap back in 2018, and it no longer made sense to continue using a gnulib git submodule. One alternative is to use ./bootstrap with --gnulib-srcdir or --gnulib-refdir if there is some practical problem with pointing GNULIB_URL at a git bundle or with the GNULIB_REVISION in bootstrap.conf.
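
Concretely, this boils down to a couple of lines in bootstrap.conf. A hedged sketch with placeholder values follows; the commit hash and bundle path are made up for illustration, not Libntlm’s real ones:

# bootstrap.conf sketch — pin the exact gnulib commit this project expects.
GNULIB_REVISION=0123456789abcdef0123456789abcdef01234567   # placeholder commit
# Optionally point GNULIB_URL at a local git bundle (for example the one shipped
# by the Debian gnulib package) so ./bootstrap needs no network access:
# GNULIB_URL=/usr/share/gnulib/gnulib.bundle                # hypothetical path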

The srcdist make rule is simple:

git archive --prefix=libntlm-v1.8/ -o libntlm-v1.8.tar.gz HEAD
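
Expressed as a make rule, it might look roughly like this; a sketch only, where $(PACKAGE) and $(VERSION) are assumed to be defined elsewhere in the maintainer makefiles and the recipe line must start with a tab:

# cfg.mk sketch — not necessarily Libntlm's exact rule.
srcdist:
	git archive --prefix=$(PACKAGE)-v$(VERSION)/ \
	  -o $(PACKAGE)-$(VERSION)-src.tar.gz HEAD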

Making the tarball generated by make dist reproducible can be more complicated; for Libntlm it was sufficient to make sure the modification times of all files were set deterministically to a timestamp found in the git repository. Interestingly, there seem to be a couple of different ways to accomplish this. Guix doesn’t support minimal source-only tarballs but relies on a .tarball-timestamp file inside the tarball. Paul Eggert explained some time ago what TZDB uses. The approach I’m using now is fairly similar to the one I suggested over a year ago.
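
For reference, the generic recipe for a deterministic tarball with GNU tar usually looks something like the following. This is a common pattern from the reproducible-builds world, not necessarily the exact commands Libntlm’s makefiles run:

# Deterministic tarball sketch: fixed mtime taken from git, sorted entries,
# neutral ownership, and gzip -n so no timestamp ends up in the gzip header.
TIMESTAMP=$(git log -1 --format=%cI)
tar --sort=name --mtime="$TIMESTAMP" \
    --owner=0 --group=0 --numeric-owner \
    -I 'gzip -n' -cf libntlm-1.8.tar.gz libntlm-1.8/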

Doing continuous testing of all this is critical to make sure things don’t regress. Libntlm’s pipeline definition now produces the generated libntlm-*.tar.gz tarballs and a checksum as a build artifact. Then I added the 000-reproducability job, which compares the checksums and fails on mismatches. You can read its delicate output in the job for the v1.8 release. Right now we insist that builds on Trisquel 11 match Ubuntu 22.04, that PureOS 10 builds match Debian 11 builds, that AlmaLinux 8 builds match RockyLinux 8 builds, and that AlmaLinux 9 builds match RockyLinux 9 builds. As you can see in the pipeline job output, not all platforms lead to the same tarballs, but hopefully this state can be improved over time. There is also partial reproducibility, where the full tarball is reproducible across two distributions but not the minimal tarball, or vice versa.
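
For illustration only — the job and artifact names below are assumptions, not Libntlm’s actual pipeline — the comparison job can be as simple as diffing checksum files produced by two build jobs:

# Hypothetical GitLab CI sketch; Libntlm's real pipeline differs in detail.
compare-checksums:
  image: debian:11
  needs:
    - build-trisquel11   # assumed to save checksums-trisquel11.txt as an artifact
    - build-ubuntu2204   # assumed to save checksums-ubuntu2204.txt as an artifact
  script:
    # Fail the pipeline if the two distributions built different tarballs.
    - diff checksums-trisquel11.txt checksums-ubuntu2204.txt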

If this way of working plays out well, I hope to implement it in other projects too.

What do you think? Happy Hacking!

VMS Software guts its community licensing program

Posted on April 14, 2024 by Michael G
VMS Software, the company developing OpenVMS, has announced some considerable changes to its licensing program for hobbyists, and the news is, well, bad. The company claims that demand for hobbyist licenses has been so high that it was unable to process requests fast enough, and as such, that the program is not delivering the “intended benefits”. Despite this apparent high demand, contributions from the community, such as writing and porting open-source software, creating wiki articles, and providing assistance on their forums, “has not matched the scale of the program”.

Now, I want to stop them right here. The OpenVMS hobbyist program was riddled with roadblocks, restrictions, unclear instructions, restrictive licensing, and similar barriers to entry. As such, it’s entirely unsurprising that the community around what is largely a relic of an operating system – with all due respect – simply hasn’t grown enough to become self-sustaining. The blame here lies entirely with VMS Software itself, and not at all with whatever community managed to form around OpenVMS despite the countless restrictions.

So, you’d expect them to expand the program, right? Perhaps embrace open source, or make the various versions and releases more freely and easily available? No, they’re going to do the exact opposite. To address not getting enough out of their community, they’re going to limit that community’s options even more. First, they’re ending the community program for Alpha and Itanium (which they call Integrity, since it covers HP’s Integrity machines), effective immediately, so they won’t be granting any new licenses for these architectures. Existing licenses will continue to work until 2025.

Effective immediately, we will discontinue offering new community licenses for non-commercial use for Alpha and Integrity. Existing holders of community licenses for these architectures will get updates for those licenses and retain their access to the Service Portal until March 2025 for Alpha and December 2025 for Integrity. All outstanding requests for Alpha and Integrity community licenses will be declined.
↫ VMS Software announcement

This sucks, but with both Alpha and Itanium being end-of-life, there are at least some arguments to be made for ending the program for these architectures. Much less defensible are the changes to x86-64 community licensing, which basically just come down to more bureaucracy for both users and VMS Software.

For x86 community licenses, we will be transitioning to a package-based distribution model (which will also replace the student license that used to be distributed as a FreeAXP emulator package). A vmdk of a system disk with OpenVMS V9.2-2 and compilers installed and licensed will be provided, along with instructions to create a virtual machine and the SYSTEM password. The license installed on that system will be valid for one year, at which point we will provide a new package. While this may entail some inconvenience for users, it enables us to continue offering licenses at no cost, ensuring accessibility without compromising our sustainability.
↫ VMS Software announcement

The vibe I’m getting from this announcement is that by offering some rudimentary and complicated form of community licensing, VMS Software hoped to gain the advantages of a vibrant open source community without all the downsides. They must have hoped that by throwing the community a bone, they’d get it to do a bunch of work for them, and now that this is not panning out, they’re taking their ball and going home.

That’s entirely within their right, of course, but I doubt these changes are going to make anyone more excited to dig into OpenVMS. All of this feels eerily similar to the attempts by QNX – before being acquired by BlackBerry – to do pretty much the same thing. QNX also tried a model where you needed to sign up and jump through a bunch of hoops to get QNX releases, and the company steeped it in talk of building a community, but of course it didn’t pan out, because people are simply not interested in a one-way relationship where you work for free for a corporation that then takes your stuff and uses it to sell their product – in this case, an operating system. This particular mistake is made time and time again, and it seems VMS Software simply did not learn this lesson.

