Drupal no longer releases a new version of Core when an upstream dependency fixes a security vulnerability. It is the responsibility of site maintainers to keep track of security advisories for all such dependent libraries. That is no small task, and a way to automate this is needed. This post looks into how this can be done.
Andy Wingo: coarse or lazy?
sweeping, coarse and lazy
One of the things that had perplexed me about the Immix collector was how to effectively defragment the heap via evacuation while keeping just 2-3% of space as free blocks for an evacuation reserve. The original Immix paper states:
To evacuate the object, the collector uses the same allocator as the mutator, continuing allocation right where the mutator left off. Once it exhausts any unused recyclable blocks, it uses any completely free blocks. By default, immix sets aside a small number of free blocks that it never returns to the global allocator and only ever uses for evacuating. This headroom eases defragmentation and is counted against immix’s overall heap budget. By default immix reserves 2.5% of the heap as compaction headroom, but […] is fairly insensitive to values ranging between 1 and 3%.
To Immix, a “recyclable” block is partially full: it contains surviving data from a previous collection, but also some holes in which to allocate. But when would you have recyclable blocks at evacuation-time? Evacuation occurs as part of collection. Collection usually occurs when there’s no more memory in which to allocate. At that point any recyclable block would have been allocated into already, and won’t become recyclable again until the next trace of the heap identifies the block’s surviving data. Of course after the next trace they could become “empty”, if no object survives, or “full”, if all lines have survivor objects.
In general, after a full allocation cycle, you don’t know much about the heap. If you could easily know where the live data and the holes were, a garbage collector’s job would be much easier 🙂 Any algorithm that starts from the assumption that you know where the holes are can’t be used before a heap trace. So I was not sure what the Immix paper meant here about allocating into recyclable blocks.
Thinking on it again, I realized that Immix might trigger collection early sometimes, before it has exhausted the previous cycle’s set of blocks in which to allocate. As we discussed earlier, there is a case in which you might want to trigger an early compaction: when a large object allocator runs out of blocks to decommission from the immix space. And if one evacuating collection didn’t yield enough free blocks, you might trigger the next one early, reserving some recyclable and empty blocks as evacuation targets.
when do you know what you know: lazy and eager
Consider a basic question, such as “how many bytes in the heap are used by live objects”. In general you don’t know! Indeed you often never know precisely. For example, concurrent collectors often have some amount of “floating garbage” which is unreachable data but which survives across a collection. And of course you don’t know the difference between floating garbage and precious data: if you did, you would have collected the garbage.
Even the idea of “when” is tricky in systems that allow parallel mutator threads. Unless the program has a total ordering of mutations of the object graph, there’s no one timeline with respect to which you can measure the heap. Still, Immix is a stop-the-world collector, and since such collectors synchronously trace the heap while mutators are stopped, these are times when you can exactly compute properties about the heap.
Let’s return to the question of measuring live bytes. For an evacuating semi-space, knowing the number of live bytes after a collection is trivial: all survivors are packed into to-space. But for a mark-sweep space, you would have to compute this information. You could compute it at mark-time, while tracing the graph, but doing so takes time, which means delaying the time at which mutators can start again.
Alternately, for a mark-sweep collector, you can compute free bytes at sweep-time. This is the phase in which you go through the whole heap and return any space that wasn’t marked in the last collection to the allocator, allowing it to be used for fresh allocations. This is the point in the garbage collection cycle in which you can answer questions such as “what is the set of recyclable blocks”: you know what is garbage and you know what is not.
Though you could sweep during the stop-the-world pause, you don’t have to; sweeping only touches dead objects, so it is correct to allow mutators to continue and then sweep as the mutators run. There are two general strategies: spawn a thread that sweeps as fast as it can (concurrent sweeping), or make mutators sweep as needed, just before they allocate (lazy sweeping). But this introduces a lag between when you know and what you know—your count of total live heap bytes describes a time in the past, not the present, because mutators have moved on since then.
For most collectors with a sweep phase, deciding between eager (during the stop-the-world phase) and deferred (concurrent or lazy) sweeping is very easy. You don’t immediately need the information that sweeping allows you to compute; it’s quite sufficient to wait until the next cycle. Moving work out of the stop-the-world phase is a win for mutator responsiveness (latency). Usually people implement lazy sweeping, as it is naturally incremental with the mutator, naturally parallel for parallel mutators, and any sweeping overhead due to cache misses can be mitigated by immediately using swept space for allocation. The case for concurrent sweeping is less clear to me, but if you have cores that would otherwise be idle, sure.
eager coarse sweeping
Immix is interesting in that it chooses to sweep eagerly, during the stop-the-world phase. Instead of sweeping irregularly-sized objects, however, it sweeps over its “line mark” array: one byte for each 128-byte “line” in the mark space. For 32 kB blocks, that’s 256 line-mark bytes per block, and the line mark bytes in each 4 MB slab of the heap are packed contiguously. Therefore you get relatively good locality, but this just mitigates a cost that other collectors don’t have to pay. So what does eager sweeping over these coarse 128-byte regions buy Immix?
Firstly, eager sweeping buys you eager identification of empty blocks. If your large object space needs to steal blocks from the mark space, but the mark space doesn’t have enough empties, it can just trigger collection and then it knows if enough blocks are available. If no blocks are available, you can grow the heap or signal out-of-memory. If the lospace (large object space) runs out of blocks before the mark space has used all recyclable blocks, that’s no problem: evacuation can move the survivors of fragmented blocks into these recyclable blocks, which have also already been identified by the eager coarse sweep.
Without eager empty block identification, if the lospace runs out of blocks, firstly you don’t know how many empty blocks the mark space has. Sweeping is a kind of wavefront that moves through the whole heap; empty blocks behind the wavefront will be identified, but those ahead of the wavefront will not. Such a lospace allocation would then have to either wait for a concurrent sweeper to advance, or perform some lazy sweeping work. The expected latency of a lospace allocation would thus be higher, without eager identification of empty blocks.
Secondly, eager sweeping might reduce allocation overhead for mutators. If allocation just has to identify holes and not compute information or decide on what to do with a block, maybe it go brr? Not sure.
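To make that eager pass concrete, here is a rough sketch in JavaScript (purely illustrative; the constants and names are assumptions drawn from the description above, not from any real collector’s source): after a trace, walk the line-mark table and classify every block as empty, recyclable, or full.
// Illustrative sketch of an eager coarse sweep over a line-mark table.
// Assumptions from the text above: 128-byte lines and 32 kB blocks, so 256
// line-mark bytes per block; a non-zero byte means the line holds a survivor.
const LINES_PER_BLOCK = 256;
function classifyBlocks(lineMarks) { // lineMarks: Uint8Array, one byte per line
  const empty = [], recyclable = [], full = [];
  const blockCount = lineMarks.length / LINES_PER_BLOCK;
  for (let block = 0; block < blockCount; block++) {
    const base = block * LINES_PER_BLOCK;
    let marked = 0;
    for (let line = 0; line < LINES_PER_BLOCK; line++) {
      if (lineMarks[base + line] !== 0) marked++;
    }
    if (marked === 0) empty.push(block);                    // whole block is a hole
    else if (marked === LINES_PER_BLOCK) full.push(block);  // no holes at all
    else recyclable.push(block);                            // survivors plus holes
  }
  return { empty, recyclable, full };
}
Because a pass like this runs inside the pause, the collector leaves the pause already knowing how many empty blocks it can hand to the large object space and which recyclable blocks could serve as evacuation targets; a deferred sweep only learns this as its wavefront advances.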
lines, lines, lines
The original Immix paper also notes a relative insensitivity of the collector to line size: 64 or 256 bytes could have worked just as well. This was a somewhat surprising result to me but I think I didn’t appreciate all the roles that lines play in Immix.
Obviously line size affects the worst-case fragmentation, though this is mitigated by evacuation (which evacuates objects, not lines). This I got from the paper. In this case, smaller lines are better.
Line size affects allocation-time overhead for mutators, though which way I don’t know: scanning for holes will be easier with fewer lines in a block, but smaller lines would contain more free space and thus result in fewer collections. I can only imagine though that with smaller line sizes, average hole size would decrease and thus medium-sized allocations would be harder to service. Something of a wash, perhaps.
However if we ask ourselves the thought experiment, why not just have 16-byte lines? How crazy would that be? I think the impediment to having such a precise line size would mainly be Immix’s eager sweep, as a fine-grained traversal of the heap would process much more data and incur possibly-unacceptable pause time overheads. But, in such a design you would do away with some other downsides of coarse-grained lines: a side table of mark bytes would make the line mark table redundant, and you eliminate much possible “dark matter” hidden by internal fragmentation in lines. You’d need to defer sweeping. But then you lose eager identification of empty blocks, and perhaps also the ability to evacuate into recyclable blocks. What would such a system look like?
Readers that have gotten this far will be pleased to hear that I have made some investigations in this area. But, this post is already long, so let’s revisit this in another dispatch. Until then, happy allocations in all regions.
NetBSD 9.3 released
The Apache News Round-up: week ending 5 August 2022
Welcome, August – we’re opening the month with another great week. Here’s what the Apache community has been up to:
ApacheCon™ – the ASF’s official global conference series, bringing Tomorrow’s Technology Today since 1998.
- Registrations are open for ApacheCon North America, 2022 https://www.apachecon.com/acna2022/register.html
ASF Board – management and oversight of the business affairs of the corporation in accordance with the Foundation’s bylaws.
- Next Board Meeting: 17 August 2022. Running Board calendar and minutes are available.
ASF Infrastructure – our distributed team on three continents keeps the ASF’s infrastructure running around the clock.
- 7M+ weekly checks yield uptime at 100.00%. Performance checks across 50 different service components spread over more than 250 machines in data centers around the world. View the ASF’s Infrastructure Uptime site to see the most recent averages.
Apache Code Snapshot – Over the past week, 244 Apache Committers and 725 contributors changed 7,148,073 lines of code over 2,945 commits. Top five contributors, in order, are: Claus Ibsen, Dan Haywood, Jinrui Zhang, Andi Huber, and Mark Thomas.
Apache Project Announcements – the latest updates by category.
Big Data –
- Apache NiFi 1.17.0 released
- Apache Arrow 8.0.0 and 9.0.0 released
Cloud Computing –
- Apache Kafka 3.2.1 released
Content –
- Apache JSPWiki 2.11.3 released
- CVE-2022-27166: XSS vulnerability on XHRHtml2Markup.jsp in JSPWiki 2.11.2
- CVE-2022-28730: Cross-site scripting vulnerability on AJAXPreview.jsp
- CVE-2022-28731: CSRF in UserPreferences.jsp
- CVE-2022-28732: Cross-site scripting vulnerability on WeblogPlugin
- CVE-2022-34158: User Group Privilege Escalation
Databases –
- Apache Kvrocks (Incubating) 2.1.0 released
Middleware –
- Apache Linkis 1.1.3 (Incubating) released
Apache Community Notices
- Apache in 2021 – By The Digits + Video highlights
- The Apache Way to Sustainable Open Source Success
- Presentations from 2021’s ApacheCon Asia and ApacheCon@Home are available on the ASF YouTube channel.
- “Success at Apache” focuses on the people and processes behind why the ASF “just works.”
- Follow the ASF on social media: @TheASF on Twitter and the ASF page on LinkedIn.
- Follow the Apache Community on Facebook and Twitter.
- Are your software solutions Powered by Apache? Download & use our “Powered By” logos.
Stay updated about The ASF
For real-time updates, sign up for Apache-related news by sending mail to announce-subscribe@apache.org and follow @TheASF on Twitter. For a broader spectrum from the Apache community, Planet Apache provides an aggregate of Project activities as well as the personal blogs and tweets of select ASF Committers.
Have an item? Contact us!
We try to catch all the major announcements and goings on at The ASF, but we’re not all-knowing. Have an item you want to see in the weekly round-up? Send it to press@apache.org.
Linux tool alternatives, configuring firewalls, and more sysadmin tips
Check out Enable Sysadmin’s top 10 articles from July 2022. Read More at Enable Sysadmin
The post Linux tool alternatives, configuring firewalls, and more sysadmin tips appeared first on Linux.com.
GNUnet News: GNUnet 0.17.3
GNUnet 0.17.3
This is a bugfix release for gnunet 0.17.2.
In addition to the fixes in the source, the documentation websites, including the handbook, have been updated and consolidated: https://docs.gnunet.org.
Notably, the GNUnet project now publishes a GNS zone for its websites which can be used to test resolution on any installation.
For example:
$ gnunet-gns -t ANY -u www.gnunet.org
Download links
- http://ftpmirror.gnu.org/gnunet/gnunet-0.17.3.tar.gz
- http://ftpmirror.gnu.org/gnunet/gnunet-0.17.3.tar.gz.sig
The GPG key used to sign is:
3D11063C10F98D14BD24D1470B0998EF86F59B6A
Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/
Noteworthy changes in 0.17.3 (since 0.17.2)
- DHT: Various bugfixes in the protocol.
- TRANSPORT: Fix HTTPS tests. #7257
- DOCUMENTATION:
  - Migrate from texinfo to sphinx.
  - Dropped dependency on texinfo.
  - Added dependency on sphinx.
A detailed list of changes can be found in the ChangeLog and the bugtracker.
Chrome 105 Beta: Custom Highlighting, Fetch Upload Streaming, and More
Unless otherwise noted, changes described below apply to the newest Chrome beta channel release for Android, Chrome OS, Linux, macOS, and Windows. Learn more about the features listed here through the provided links or from the list on ChromeStatus.com. Chrome 105 is beta as of DATE. You can download the latest on Google.com for desktop or on Google Play Store on Android.
Custom Highlight API
The Custom Highlight API extends the concept of highlighting pseudo-elements by providing a way to style the text of arbitrary ranges, rather than being limited to the user agent-defined ::selection, ::inactive-selection, ::spelling-error, and ::grammar-error. This is useful in a variety of scenarios, including editing frameworks that wish to implement their own selection, find-in-page over virtualized documents, multiple selection to represent online collaboration, or spell checking frameworks.
Without this feature, web developers and framework authors are forced to modify the underlying structure of the DOM tree to achieve the rendering they desire. This is complicated in cases where the desired highlight spans across multiple subtrees, and it also requires DOM updates to adjust highlights as they change. The custom highlight API provides a programmatic way of adding and removing highlights that does not affect the underlying DOM structure, but instead applies styles to text based on range objects.
In Chrome 105, only the color and background-color properties are supported when styling highlights. Support for other properties will be added later.
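As a rough sketch of the shape of the API (the highlight name and range boundaries here are made up, and the page is assumed to contain a paragraph):
// Build a Range over some existing text; the exact boundaries are page-specific.
const textNode = document.querySelector("p").firstChild;
const range = new Range();
range.setStart(textNode, 0);
range.setEnd(textNode, 5);
// Register a named highlight; no wrapper elements are inserted into the DOM.
CSS.highlights.set("search-result", new Highlight(range));
// Then style it from CSS with the ::highlight() pseudo-element, for example:
// ::highlight(search-result) { background-color: yellow; color: black; }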
Container Queries
Container queries allow authors to style elements according to the size of a container element. This capability means that a component owns its responsive styling logic. This makes the component much more resilient, as the styling logic is attached to it, no matter where it appears on the page.
Container queries are similar to media queries, but evaluate against the size of a container rather than the size of the viewport. A known issue is that container queries do not work when an author combines them with table layout in a multicolumn layout. We expect to fix this in Chrome 106. For more information, see @container and :has(): two powerful new responsive APIs. For other CSS features in this release, see below.
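As a rough illustration in CSS (the class names are arbitrary), a component declares its own container and then styles its children against that container’s width rather than the viewport’s:
/* The wrapper establishes an inline-size query container for its descendants. */
.card-wrapper {
  container-type: inline-size;
}
/* Applies whenever the wrapper, not the viewport, is at least 400px wide. */
@container (min-width: 400px) {
  .card {
    display: flex;
  }
}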
:has() Pseudo Class
The :has() pseudo class specifies an element having at least one element that matches the relative selector passed as an argument. Unlike other selectors, :has() lets a style rule target elements that come before the match in the tree: ancestors, earlier siblings, and earlier siblings of ancestors. For example, the following rule matches only anchor tags that have an image tag as a child.
a:has(> img)
For more information, see @container and :has(): two powerful new responsive APIs. For other CSS features in this release, see below.
Fetch Upload Streaming
Fetch upload streaming lets web developers make a fetch with a ReadableStream body. Previously, you could only start a request once you had the whole body ready to go. But now, you can start sending data while you’re still generating the content, improving performance and memory usage.
For example, an online form could initiate a fetch as soon as a user focuses a text input field. By the time the user presses enter, the fetch() headers would already have been sent. This feature also allows you to send content as it’s generated on the client, such as audio and video. For more information, see Streaming requests with the fetch API.
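A minimal sketch, assuming a hypothetical /upload endpoint that accepts a streamed POST body (the endpoint and the chunks are made up):
// Produce the request body incrementally instead of building it up front.
const body = new ReadableStream({
  start(controller) {
    const encoder = new TextEncoder();
    controller.enqueue(encoder.encode("first chunk, sent before the rest exists"));
    controller.enqueue(encoder.encode("another chunk, generated later"));
    controller.close();
  },
});
await fetch("/upload", {
  method: "POST",
  body,
  duplex: "half", // required when the request body is a stream
});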
Window Controls Overlay for Installed Desktop Web Apps
Window controls overlay extends an app’s client area to cover the entire window, including the title bar, and the window control buttons (close, maximize/restore, minimize). The web app developer is responsible for drawing and input handling for the entire window except for the window controls overlay. Developers can use this feature to make their installed desktop web apps look like operating system apps. For more information, see Customize the window controls overlay of your PWA’s title bar.
Origin Trials
No origin trials are beginning in this version of Chrome. However there are a number of ongoing origin trials which you can find on the Chrome Origin Trials dashboard. Origin trials allow you to try new features and give feedback on usability, practicality, and effectiveness to the web standards community. To learn more about origin trials in Chrome, visit the Origin Trials Guide for Web Developers. Microsoft Edge runs its own origin trials separate from Chrome. To learn more, see the Microsoft Edge Origin Trials Developer Console.
Completed Origin Trials
The following features, previously in a Chrome origin trial, are now enabled by default.
Media Source Extensions in Workers
The Media Source Extensions (MSE) API is now available from DedicatedWorker contexts to enable improved performance of buffering media for playback by an HTMLMediaElement on the main Window context. By creating a MediaSource object in a DedicatedWorker, an application may then obtain a MediaSourceHandle from it and call postMessage() to send it to the main thread for attaching to an HTMLMediaElement. The context that created the MediaSource object may then use it to buffer media.
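Roughly, and leaving out the actual buffering logic, the hand-off might look like this (the file names are placeholders):
// worker.js (DedicatedWorker): create the MediaSource and transfer its handle.
const mediaSource = new MediaSource();
const handle = mediaSource.handle;
postMessage(handle, [handle]); // MediaSourceHandle is transferable
// ...add SourceBuffers to mediaSource and append media data here as usual.

// main.js: attach the transferred handle to a media element.
const worker = new Worker("worker.js");
worker.addEventListener("message", ({ data }) => {
  document.querySelector("video").srcObject = data;
});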
Viewport-height Client Hint
Chrome supports the new Sec-CH-Viewport-Height
client hint. This is a counterpart to the Sec-CH-Viewport-Width
previously introduced in Chrome. Together they provide information about a viewport’s size to an origin. To use these hints, pass Sec-CH-Viewport-Height
or Sec-CH-Viewport-Width
to the Accept-CH
header.
Other Features in this Release
Accurate Screen Labels for Multi-Screen Window Placement
This release enhances the screen label strings provided by the Multi-Screen Window Placement API. Specifically, it refines the ScreenDetailed.label property by replacing the previous placeholders with information from the device’s Extended Display Identification Data (EDID) or from a higher-level operating system API. For example, instead of returning “External Display 1”, the label property will now return something like “HP z27n” or “Built-in Retina Display”. These more accurate labels match those shown by operating systems in display settings dialog boxes. The labels are only exposed to sites that have been granted the "window-placement" permission by the user.
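For example, once the permission has been granted, reading the refined labels might look like this (the output shown is illustrative):
// Requires the "window-placement" permission.
const screenDetails = await window.getScreenDetails();
for (const screen of screenDetails.screens) {
  console.log(screen.label); // e.g. "HP z27n" or "Built-in Retina Display"
}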
CSS: Preventing Overscroll Effects for Fixed Elements
Setting an element’s position CSS property to fixed (when the element’s containing block is the root) will now prevent it from performing the effects specified by overscroll-behavior. In particular, such fixed-position elements will not move during overscroll effects.
DisplayMediaStreamConstraints.systemAudio
A new constraint is being added to MediaDevices.getDisplayMedia() to indicate whether system audio should be offered to the user. User agents sometimes offer audio for capturing alongside video. But not all audio is created alike. Consider video-conferencing applications. Tab audio is often useful, and can be shared with remote participants. But system audio includes participants’ own audio, and may not be appropriate to share back. To use the new constraint, pass systemAudio as a constraint. For example:
const stream = await navigator.mediaDevices.getDisplayMedia({
video: true,
audio: true,
systemAudio: "exclude" // or "include"
});
This feature is only supported on desktop.
Expose TransformStreamDefaultController
To conform with the spec, the TransformStreamDefaultController class is now available on the global scope. This class already exists and can be accessed using code such as:
let TransformStreamDefaultController;
new TransformStream({ start(c) { TransformStreamDefaultController = c.constructor; } });
This change makes such code unnecessary since TransformStreamDefaultController is now on the global scope. Possible uses for this class include monkey patching properties onto TransformStreamDefaultController.prototype, or feature-testing existing properties of it more easily. Note that the class is not constructible. In other words, this throws an error:
new TransformStreamDefaultController()
HTML Sanitizer API
The HTML Sanitizer API is an easy-to-use and safe way to remove executable code from arbitrary, user-supplied content. The goal of the API is to make it easier to build web applications that are free of cross-site scripting vulnerabilities and to shift part of the maintenance burden for such apps to the platform.
In this release, only basic functionality is supported, specifically Element.setHTML(). The Sanitizer interface will be added at a later stage. Namespaced content (SVG and MathML) is not yet supported, only HTML. For more information on the API, see HTML Sanitizer API – Web APIs.
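A minimal sketch of the basic functionality (the target element and the markup string are made up):
// Untrusted markup; the event handler attribute should not survive sanitization.
const untrusted = '<p>Hi <img src="x" onerror="alert(1)"></p>';
// setHTML() parses the string, drops script-running content, and replaces the
// element's children with the sanitized result.
document.querySelector("#comment").setHTML(untrusted);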
import.meta.resolve()
The import.meta.resolve() method returns the URL to which the passed specifier would resolve in the context of the current script. That is, it returns the URL that would be imported if you called import(). A specifier is a URL beginning with a valid scheme or one of /, ./, or ../. See the HTML spec for examples.
This method makes it easier to write scripts which are not sensitive to their exact location, or to the web application’s module setup. Some of its capabilities are doable today, in a longer form, by combining new URL() with the existing import.meta.url property. But the integration with import maps allows resolving URLs that are affected by import maps.
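For example, inside a module (the specifier here is arbitrary), something like the following should work:
// Resolves relative to the current module's URL, honoring any import maps.
const configUrl = import.meta.resolve("./config/app.json");
// In simple cases this matches: new URL("./config/app.json", import.meta.url).href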
Improvements to the Navigation API
Chrome 105 introduces two new methods on the NavigateEvent of the Navigation API (introduced in Chrome 102) to improve on methods that have proved problematic in practice. intercept(), which lets developers control the state following the navigation, replaces transitionWhile(), which proved difficult to use. The scroll() method, which scrolls to an anchor specified in the URL, replaces restoreScroll(), which does not work for all types of navigation. For explanations of the problems with the existing methods and examples of using the new ones, see Changes to NavigateEvent.
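As a rough sketch (the URL check and the renderNewView() function are placeholders), a single-page app might use the new methods like this:
navigation.addEventListener("navigate", (event) => {
  // Skip navigations we can't or shouldn't handle in-page.
  if (!event.canIntercept || event.hashChange) return;
  event.intercept({
    async handler() {
      await renderNewView(event.destination.url); // placeholder render step
      event.scroll(); // scroll to any fragment in the destination URL now
    },
  });
});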
The transitionWhile() and restoreScroll() methods are also deprecated in this release. We expect to remove them in Chrome 108. See below for other deprecations and removals in this release.
onbeforeinput Global Event Handler Content Attribute
The onbeforeinput global content attribute is now supported in Chrome. The beforeinput event was already available via addEventListener(). Chrome now also allows feature detection by testing against document.documentElement.onbeforeinput.
Opaque Response Blocking v0.1
Opaque Response Blocking (ORB) is a replacement for Cross-Origin Read Blocking (CORB). CORB and ORB are both heuristics that attempt to prevent cross-origin disclosure of “no-cors” subresources.
Picture-in-Picture API Comes to Android
The Picture-in-Picture API allows websites to create a floating video window that is always on top of other windows so that users may continue consuming media while they interact with other sites or applications on their device. This feature has been available on desktop since Chrome 70. It’s now available for Chrome running on Android 11 or later. This change only applies to <video> elements. For information on using the Picture-in-Picture API, see Watch video using Picture-in-Picture.
Response.json()
The Response() constructor allows for creating the body of a response from many types; however, the existing API does not let you directly create a Response from a JSON object (the response.json() instance method only parses a body as JSON). The Response.json() static method fills this gap.
Response.json() returns a new Response object and takes two arguments. The first argument is the data to convert to JSON. The second is an optional initialization object.
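For example (the payload is made up), as might be used in a service worker’s fetch handler:
// The body is serialized to JSON and the Content-Type header is set automatically.
const response = Response.json({ ok: true, items: [] }, { status: 200 });
console.log(response.headers.get("Content-Type")); // "application/json"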
Syntax Changes to Markup Based Client Hints Delegation
The syntax for the delegation of client hints to third-party content that requires client information lost by user agent reduction, which shipped in Chrome 100, is changing.
Previous syntax:<meta name="accept-ch" value="sec-ch-dpr=(https://foo.bar https://baz.qux), sec-ch-width=(https://foo.bar)">
New syntax:<meta http-equiv="delegate-ch" value="sec-ch-dpr https://foo.bar https://baz.qux; sec-ch-width https://foo.bar">
Writable Directory Prompts for the File System Access API
Chromium now allows returning a directory with both read and write permissions in a single prompt for the File System Access API. Previously, Window.showDirectoryPicker() returned a read-only directory (after showing a read access prompt), requiring a second prompt to get write access. This double prompt is a poor user experience and contributes to confusion and permission fatigue among users.
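With this change, a sketch of requesting a writable directory in one step looks like the following (the mode option follows the current File System Access draft; the file name is made up):
// One prompt covers both read and write access to the chosen directory.
const dirHandle = await window.showDirectoryPicker({ mode: "readwrite" });
// No second permission prompt is needed before creating or writing files.
const fileHandle = await dirHandle.getFileHandle("notes.txt", { create: true });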
Deprecations and Removals
This version of Chrome introduces the deprecations and removals listed below. Visit ChromeStatus.com for lists of planned deprecations, current deprecations and previous removals.
Remove WebSQL in Non-secure Contexts
WebSQL in non-secure contexts is now removed. The Web SQL Database standard was first proposed in April 2009 and abandoned in November 2010. Gecko never implemented this feature and WebKit deprecated it in 2019. The W3C encourages Web Storage and Indexed Database for those needing alternatives.
Developers should expect that WebSQL itself will be deprecated and removed when usage is low enough.
CSS Default Keyword is Disallowed in Custom Identifiers
The CSS keyword 'default' is no longer allowed within CSS custom identifiers, which are used for many types of user-defined names in CSS (for example, names created by @keyframes rules, counters, @container names, custom layout or paint names). This adds 'default' to the list of names that are restricted from use in custom identifiers, specifically 'inherit', 'initial', 'unset', 'revert', and 'revert-layer'.
Deprecations in the Navigation API
The transitionWhile() and restoreScroll() methods are deprecated in this release, and we expect to remove them in Chrome 108. Developers who need this functionality should use the new intercept() and scroll() methods. For explanations of the problems with the existing methods and examples of using the new ones, see Changes to NavigateEvent.
Deprecate Non-ASCII Characters in Cookie Domain Attributes
To align with the latest spec (RFC 6265bis), Chromium will soon reject cookies with a Domain attribute that contains a non-ASCII character (for example, Domain=éxample.com).
Support for IDN domain attributes in cookies has long been unspecified, with Chromium, Safari, and Firefox all behaving differently. This change standardizes on Firefox’s behavior of rejecting cookies with non-ASCII domain attributes.
Since Chromium has previously accepted non-ASCII characters and tried to convert them to normalized punycode for storage, we will now apply stricter rules and require valid ASCII (punycode if applicable) domain attributes.
A warning is printed to the console starting in 105. Removal is expected in 106.
Remove Gesture Scroll DOM Events
The gesture scroll DOM events have been removed from Chrome: specifically, gesturescrollstart, gesturescrollupdate, and gesturescrollend. These were non-standard APIs that were added to Blink for use in plugins, but had also been exposed to the web.
Welcome to Deep Dive AI
With AI systems being so complex, concepts like “program” or “source code” in the Open Source Definition are challenged in new and surprising ways.
The post Welcome to Deep Dive AI first appeared on Voices of Open Source.