Andy Wingo: ephemerons and finalizers

Good day, hackfolk. Today we continue the series on garbage collection
with some notes on ephemerons and finalizers.

conjunctions and disjunctions

First described in a 1997 paper by Barry Hayes, which attributes the
invention to George Bosworth, ephemerons are a kind of weak key-value
association.

Thinking about the problem abstractly, consider that the garbage
collector’s job is to keep live objects and recycle memory for dead
objects, making that memory available for future allocations. Formally
speaking, we can say:

  • An object is live if it is in the root set

  • An object is live if it is referenced by any live object.

This circular definition uses the word any, indicating a disjunction:
a single incoming reference from a live object is sufficient to mark a
referent object as live.
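
As a sketch, this disjunctive definition is just reachability, computable
with a worklist. The object model below (a dict mapping each object to
the objects it references) is hypothetical, chosen only for illustration:

```python
# Minimal mark-phase sketch over a hypothetical object graph: an object
# is live if it is a root, or if any live object references it.
def mark_live(roots, edges):
    """edges maps each object to the list of objects it references."""
    live = set()
    worklist = list(roots)          # every root is live
    while worklist:
        obj = worklist.pop()
        if obj in live:
            continue                # already visited
        live.add(obj)
        # one incoming reference from a live object suffices (disjunction)
        worklist.extend(edges.get(obj, ()))
    return live
```

Anything not in the returned set is dead, and its memory can be recycled.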

Ephemerons augment this definition with a conjunction:

  • An object V is live if, for an ephemeron E containing an
    association between objects K and V, both E and K are live.

This is a more annoying property for a garbage collector to track. If
you happen to mark K as live and then you mark E as live, then you
can just continue to trace V. But if you see E first and then you
mark K, you don’t really have a direct edge to V. (Indeed this is
one of the main purposes for ephemerons: associating data with an
object, here K, without actually modifying that object.)

During a trace of the object graph, you can know if an object is
definitely alive by checking if it was visited already, but if it wasn’t
visited yet that doesn’t mean it’s not live: we might just have not
gotten to it yet. Therefore one common implementation strategy is to
wait until tracing the object graph is done before tracing ephemerons.
But then we have another annoying problem, which is that tracing
ephemerons can result in finding more live ephemerons, requiring another
tracing cycle, and so on. Mozilla’s Steve Fink wrote a nice article on
this issue earlier this year, with some mitigations.
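
The wait-then-iterate strategy can be sketched as follows, under
illustrative assumptions: edges is a dict-of-references object model and
ephemerons is a list of (E, K, V) triples, all names hypothetical:

```python
# Sketch of "trace, then scan ephemerons to a fixpoint": a value V is
# traced only when both its ephemeron E and its key K are already live.
def mark_with_ephemerons(roots, edges, ephemerons):
    live = set()

    def trace(objs):
        worklist = list(objs)
        while worklist:
            obj = worklist.pop()
            if obj not in live:
                live.add(obj)
                worklist.extend(edges.get(obj, ()))

    trace(roots)
    # Tracing a value can make further keys (or ephemerons) live, so we
    # must rescan: each rescan is another tracing cycle.
    fired = True
    while fired:
        fired = False
        for e, k, v in ephemerons:
            if e in live and k in live and v not in live:
                trace([v])
                fired = True
    return live
```

In the worst case, each pass over the ephemeron table makes exactly one
more value live, which is the quadratic behavior that mitigations target.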

finalizers aren’t quite ephemerons

All that is by way of introduction. If you just have an object graph
with strong references and ephemerons, our definitions are clear and
consistent. However, if we add some more features, we muddy the waters.

Consider finalizers. The basic idea is that you can attach one or a
number of finalizers to an object, and that when the object becomes
unreachable (not live), the system will invoke a function. One way to
imagine this is a global association from finalizable object O to
finalizer F.

As it is, this definition is underspecified in a few ways. One, what
happens if F references O? It could be a GC-managed closure, after
all. Would that prevent O from being collected?

Ephemerons solve this problem, in a way; we could trace the table of
finalizers like a table of ephemerons. In that way F would only be
traced if O is live already, so that by itself it wouldn’t keep O
alive. But then if O becomes dead, you’d want to invoke F, so you’d
need it to be live, so reachability of finalizers is not quite the same
as ephemeron-reachability: indeed logically all F values in the
finalizer table are live, because they all will be invoked at some
point.

In the end, if F references O, then F actually keeps O alive.
Whether this prevents O from being finalized depends on our definition
for finalizability. We could say that an object is finalizable if it is
found to be unreachable after a full trace, and the finalizers F are
in the root set. Or we could say that an object is finalizable if it is
unreachable after a partial trace, in which finalizers are not
themselves in the initial root set, and instead we trace them after
determining the finalizable set.
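
The second, partial-trace definition can be sketched like so
(hypothetical model: finalizers maps each finalizable object O to its
closure F, and F’s captured references appear as ordinary edges):

```python
# Sketch of finalizability via a partial trace: finalizer closures are
# NOT in the initial root set, so F referencing O does not keep O alive
# for the purpose of deciding finalizability.
def compute_finalizable(roots, edges, finalizers):
    def reachable(start):
        seen, worklist = set(), list(start)
        while worklist:
            obj = worklist.pop()
            if obj not in seen:
                seen.add(obj)
                worklist.extend(edges.get(obj, ()))
        return seen

    live = reachable(roots)            # finalizers excluded from the roots
    finalizable = [o for o in finalizers if o not in live]
    # The finalizables (and their closures) must survive long enough for
    # their finalizers to run, so trace them in a second pass.
    live |= reachable(finalizable + [finalizers[o] for o in finalizable])
    return finalizable, live
```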

Having finalizers in the initial root set is unfortunate: there’s no
quick check you can make when adding a finalizer to signal this problem
to the user, and it’s very hard to convey to a user exactly how it is
that an object is referenced. You’d have to add lots of gnarly
documentation on top of the already unavoidable gnarliness that you had
to write. But perhaps it is a local maximum.

Incidentally, you might think that you can get around these issues by
saying “don’t reference objects from their finalizers”, and that’s true
in a way. However it’s not uncommon for finalizers to receive the
object being finalized as an argument; after all, it’s that object which
probably encapsulates the information necessary for its finalization.
Of course this can lead to the finalizer prolonging the longevity of an
object, perhaps by storing it to a shared data structure. This is a
risk for correct program construction (the finalized object might
reference live-but-already-finalized objects),
but not really a burden for the garbage collector, except in that it’s a
serialization point in the collection algorithm: you trace, you compute
the finalizable set, then you have to trace the finalizables again.

ephemerons vs finalizers

The gnarliness continues! Imagine that O is associated with a
finalizer F, and also, via ephemeron E, some auxiliary data V.
Imagine that at the end of the trace, O is unreachable and so will be
dead. Imagine that F receives O as an argument, and that F looks
up the association for O in E. Is the association to V still
there?

Guile’s documentation on guardians, a finalization-like facility,
specifies that weak associations
(i.e. ephemerons) remain in place when an object becomes collectable,
though I think in practice this has been broken since Guile switched to
the BDW-GC collector some 20 years ago or so and I would like to fix it.

One nice solution falls out if you prohibit resuscitation by not
including finalizer closures in the root set and not passing the
finalizable object to the finalizer function. In that way you will
never be able to look up the association E×O→V, because you don’t have
O. This is the path that JavaScript has taken, for example, with
WeakMap and FinalizationRegistry.
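
Python’s weakref module takes a comparable no-resuscitation path: the
callback registered with weakref.finalize receives only the extra
arguments you supply, never the dead object itself, and passing the
object as an argument would keep it alive. A minimal sketch (relying on
CPython’s reference counting to run the callback promptly):

```python
import weakref

class Resource:
    """A stand-in for an object owning some external state."""

log = []
r = Resource()
# Register a finalizer; only detached data (a string here) is passed to
# the callback, so the callback can never look the object up again.
weakref.finalize(r, log.append, "resource freed")

del r  # last strong reference gone: CPython runs the finalizer here
```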

However if you allow for resuscitation, for example by passing
finalizable objects as an argument to finalizers, I am not sure that
there is an optimal answer. Recall that with resuscitation, the trace
proceeds in three phases: first trace the graph, then compute and
enqueue the finalizables, then trace the finalizables. When do you
perform the conjunction for the ephemeron trace? You could do so after
the initial trace, which might augment the live set, protecting some
objects from finalization, but possibly missing ephemeron associations
added in the later trace of finalizable objects. Or you could trace
ephemerons at the very end, preserving all associations for finalizable
objects (and their referents), which would allow more objects to be
finalized at the same time.

Probably if you trace ephemerons early you will also want to trace them
later: you would trace them early because you think ephemeron
associations are important and want them to prevent objects from being
finalized, and it would be weird if they were not present for
finalizable objects.
This adds more serialization to the trace algorithm, though:

  1. (Add finalizers to the root set?)

  2. Trace from the roots

  3. Trace ephemerons?

  4. Compute finalizables

  5. Trace finalizables (and finalizer closures if not done in 1)

  6. Trace ephemerons again?
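
Under illustrative assumptions (dict-of-edges object model, (E, K, V)
ephemeron triples, finalizers mapping each object to its closure; all
names hypothetical), the whole sequence might be sketched as:

```python
# Sketch of the serialized trace: roots, ephemeron fixpoint, finalizable
# computation, finalizable trace, then a second ephemeron fixpoint so
# that associations keyed on finalizable objects remain visible.
def collect(roots, edges, ephemerons, finalizers):
    live = set()

    def trace(objs):
        worklist = list(objs)
        while worklist:
            obj = worklist.pop()
            if obj not in live:
                live.add(obj)
                worklist.extend(edges.get(obj, ()))

    def trace_ephemerons():
        fired = True
        while fired:
            fired = False
            for e, k, v in ephemerons:
                if e in live and k in live and v not in live:
                    trace([v])
                    fired = True

    trace(roots)                                              # step 2
    trace_ephemerons()                                        # step 3
    finalizable = [o for o in finalizers if o not in live]    # step 4
    if finalizable:  # steps 5 and 6 can be skipped entirely otherwise
        trace(finalizable + [finalizers[o] for o in finalizable])  # step 5
        trace_ephemerons()                                    # step 6
    return live, finalizable
```

Note that it is the second ephemeron pass that lets a finalizer observe
an association keyed on the very object being finalized.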

These last few paragraphs are the reason for today’s post. It’s not
clear to me that there is an optimal way to compose ephemerons and
finalizers in the presence of resuscitation. If you add finalizers to
the root set, you might prevent objects from being collected. If you
defer them until later, you lose the optimization that you can skip
steps 5 and 6 if there are no finalizables. If you trace
(not-yet-visited) ephemerons twice, that’s overhead; if you trace them
only once, the user could get what they perceive as premature
finalization of otherwise reachable objects.

In Guile I think I am going to try to add finalizers to the root set,
pass the finalizable to the finalizer as an argument, and trace
ephemerons twice if there are finalizable objects. I think this will
minimize incoming bug reports. I am bummed though that I can’t
eliminate them by construction.

Until next time, happy hacking!

If one GUI’s not enough for your SPARC workstation, try four

This is a 1990 Solbourne Computer S3000 all-in-one workstation based around the 33MHz Panasonic MN10501, irreverently code-named the Kick-Ass Processor or KAP. It is slightly faster than the original gangsta 1989 Sun SPARCstation and SPARCstation 1+, with which the S3000 and the related S4000 and later S4000DX/S4100 directly competed.

Solbourne was an early SPARC innovator through majority owner Matsushita, which was a SPARC licensee in competition with Fujitsu, and was actually the first to introduce multiprocessing to the SPARC ecosystem, years before Sun themselves did. To do this and maintain compatibility, Solbourne licensed SunOS 4.x from Sun and rebadged it as OS/MP, with support for SMP as well as their custom MMU and fixes for various irregularities in KAP, which due to those bugs was effectively limited to uniprocessor implementations. Their larger SMP systems used Fujitsu (ironically), Weitek and Texas Instruments CPUs; I have a Series5 chassis and a whole bunch of KBus cards Al Kossow gave me that I’ve got to assemble into a working system one of these days.

It turns out that particular computing environment was really the intersection point for a lot of early GUI efforts, which were built and run on Sun workstations and thus will also run on the Solbourne. With some thought, deft juggling of PATH and LD_LIBRARY_PATH and a little bit of shell scripting, it’s possible to create a single system that can run a whole bunch of them. That’s exactly what reykjavik, this S3000, will be doing. This is by far the coolest thing I’ve read and learned about in a long, long time, and an amazing source of information and collection of screenshots and explanations.

Author Claire Ahn Wants THIS Actor To Be In ‘I Guess I Live Here Now’ Movie | Open Book | Seventeen

For all of you who LOVE books, you’re going to LOVE this. We sat down with Claire Ahn, author of ‘I Guess I Live Here Now’. She’s answering YOUR questions and helping us kick off Seventeen’s new book club. Watch along as she reveals the inspiration for her characters, her experiences growing up in Seoul, and her FAVE Korean foods (that made appearances in her book)! More AMAZING reads and author interviews are coming your way soon.

Install Xampp and WordPress in Linux

In this video we will learn how to install #xampp and #wordpress on Linux.

#Techtd is a YouTube channel & Facebook Page dedicated to creating the best Adobe Photoshop and Illustrator, Linux OS, and web design and development tutorials, as well as open-source software tutorials for Gimp, Inkscape, LibreOffice, etc. Our goal is to create the best, most informative, and most entertaining tutorials on the web. If you enjoy our videos then don’t forget to subscribe to this channel: https://www.youtube.com/techtd

Visit our Blog site to know more:
https://techtdbangla.com/


Welcome to Showcase Shorts: A new way to keep you updated!

by Marie Achour.  

Hello Community Members,

The Moodle Products Team is delighted to welcome you to our very first Showcase Shorts!!

We’ve received feedback telling us you’d like more insight into the work we do at Moodle HQ and our progress on the development of the Moodle platforms managed by our teams.

Our Products team at Moodle work in 3-week sprints, and at the end of every sprint cycle, we hold a Showcase during which our teams share what they have achieved during the sprint period.

To make it easier for you to stay connected to our work at Moodle HQ, we want to share the highlights of our Sprint Showcase with you from this point forward!

This sprint was not a normal 3-week sprint as it included a pause for many of our team members who attended MoodleMoot Global 2022 in Barcelona at the end of September.

Despite this, we achieved a lot, including:

We also progressed product design and development for the Moodle LMS 4.1 and 4.2 releases.

You can see all the details, as well as updates from our MoodleCloud & Mobile Apps teams, by watching these ‘Showcase Shorts’:

  • Get an update on our progress with MDL-75071 which will see us deprecate Atto in favour of Tiny 6. 

  • See a demonstration of changes being made to our Grade User Report and Grade Single View Report as part of our efforts to improve the user experience in Gradebook as part of MDL-74953

  • Find out about the latest updates to our Database activity as part of MDL-75059, a project proudly supported by the MUA.

  • See some of the changes we’ve implemented to get better data about our MoodleCloud customers to improve our product positioning.

  • Watch a summary of all the things that are being delivered for Moodle App Workplace 4.0 and a new feature coming on all our Moodle Apps.

  • Get an insider’s view of the progress of our integrations.

And last but certainly not least, you can get an overview of some of the big things the Moodle Community team have been working on, including data insights from MoodleMoot Global and a sneak peek at the mobile version of our redesigned moodle.org, due for launch very, very soon!

We truly hope you enjoy this update, and, as always, we’d love your feedback; so please add comments to the MDL issues in the tracker or post your thoughts in our forums.

Until next sprint!
The Moodle Products Team

Event Organizers: Camp Debrief: Colorado WebCamp 2022

This is the second in a series of “Camp Debriefs” by the Drupal Event Organizer Working Group. In this debrief, Fei Lauren (feilauren) interviews Matthew Saunders (MatthewS)  about DrupalCamp Colorado 2022. If you would like your Drupal event to be featured in a Camp Debrief, contact the EOWG.

How did you learn about Drupal, and what drew you in?

Matthew had already established a career in tech when he stumbled into the Drupal community over 15 years ago. He was working in the non-profit sector building custom PHP and MySQL solutions. In an effort to branch out and consider the potential for new tooling, he was asked to organize a Think Tank involving a variety of tech professionals and areas of expertise. 

“We all just got together to talk about where this crazy thing The Internet was headed.”

Enter Drupal, stage left.

“I sat there sort of in amazement as they went through Drupal, 4.5.1 or 4.5.2… I thought to myself looking at this, wow, I never have to write another authentication system ever again!”

Not long after that, Matthew was attending the first DrupalCon in Barcelona. He was able to meet and share ideas with Dries, Moshe and some of the other folks who are known as the “Elder Statesmen”, so to speak. Drupal has been a hub of his career ever since.

What are you most proud of? 

Two things. Well, technically three. 

Matthew was on the team that moved Examiner.com from ColdFusion to the soon-to-be-released Drupal 7. At the time, it was the largest Drupal 7 migration in the history of Drupal.

“I think we ended up writing about 18% of Drupal 7”

Matthew also sat on the Drupal Association Board of Directors for about two and a half years. But there is another important accomplishment in his career that he is also extremely proud of – the reason we are all here, reading this blog – his work in establishing DrupalCamp Colorado.

How was DrupalCamp Colorado born? 

Originally, it was never meant to be such a large event. It wasn’t even meant to be a web camp. It was a meetup hosted by an agency in Boulder. 

“It was a bunch of nerds sitting around a table showing each other things that they’d worked on and you know, having a couple of cold adult beverages and eating pizza.”

Four or five months later, the casual meetup turned into a very small web camp with only about 20 attendees. The camp grew from there until 2011. DrupalCamp Colorado’s organizers were also part of the organizing committee for DrupalCon, so when Denver was announced as the host to DrupalCon 2012, a lot of people tuned in. That year there were about 600 attendees at DrupalCamp Colorado. The following year, DrupalCon Colorado had about 1400 – and it was the last community-driven DrupalCon. Professional consultants have been involved ever since.
 
Building momentum like that is no small feat, how did it happen so quickly?

20 attendees to 600 is no small feat, but it didn’t happen overnight. 

“It was 6 or 7 years to get up to that point. And fortunately… many of us were working for Examiner at that point, so we had some pretty remarkable resources behind us. And a whole lot of support in people like our CTO, Michael Meyers.” 

Matthew also mentions how many organizers were putting in full time hours in order to hit deadlines successfully. I think an important takeaway for new organizers is that a successful event with 20 attendees is still a successful event.

What would you say are the most impactful things that we, as newer organizers, could be doing to help breathe life into our initiatives, into our camps, into our events?

“A big part of it is early planning.”

Especially when working to establish a new camp, many volunteers and attendees will likely learn about it by word of mouth. Leaving lots of time for people to plan and promote is really important. 

Finding a venue is also challenging, but again, working with potential venues is much easier to do when there is a lot of lead time. You are much more likely to get something inexpensively or even free if you give the venue lots of time to plan. Try reaching out to a charter school, Matthew suggests, as most of what they do is already community driven. But schools in general have fewer rooms occupied in the summer, and all the equipment a camp might need is already in place. 

What are some of the things that were initially a struggle, but are now not an issue at all? 

There are so many resources that exist on drupal.org – use a recipe!

Hearing this answer, it certainly doesn’t seem like rocket science. But just like following a recipe in a kitchen, the first time can still be challenging – that’s okay. A recipe still gives you a significant boost. Don’t try and reinvent wheels if you don’t have to!  
Once you get used to following a recipe that works for your event, “you don’t even have to think about it at the end of it. You’ve got a camp that is just gonna run itself because you’ve broken it up into bite size chunks”. 

Okay so, maybe it doesn’t really run itself but it’s difficult to argue the benefit of knowing the steps and making sure you have all the ingredients lined up in advance.

For you, what is the best part about organizing? 

Matthew offers two things. 

“We always have a really good party at the end, and it’s sort of a release for all of the organizers.”

And the second is a bit more of a ubiquitous reply among organizers – “the camp is a way of giving back”. 

2022: The great hybrid challenge – tell us more about how this went.

Hopin, Zoom, StreamYard – there are so many tools out there, what has worked for you?

“Don’t try to do it with Zoom or something like that. Make sure that you’re using a tool like Hopin.”

Hopin is expensive at a glance, but there are things that can be done to bring that cost down. As of now, they sell annual subscriptions which can be quite costly. But you can pay monthly and pause your subscription. There may also be potential to collaborate with other Drupal web camps and events to lower the impact on a single event (as long as there is a single admin coordinating). 

Even if you do pay for an entire year (somewhere in the range of $1000), they also provide a lot of features out of the box, including the ability to offer virtual spaces for any of your potential sponsors, such as any that might be willing to cover the cost of Hopin.

Additionally, Hopin provides other features that really add up. A “ready made” structure for virtual events out of the box with automatic recordings for sessions means less work before and after the event. The friendly scheduling interface for users and presenters means everyone knows where they need to be while the event is running. Registrations are easier too, they even offer payment handling. 

If you have ever organized a camp or any large event, you know how much work it is. So this next piece of advice is important. Matthew warns, “Expect to have twice as much work. You’re doing two events. We used Hopin, but we also had our physical venue.” It’s twice as much work, but you don’t have to schedule full days – they tried out half days and it helped a bit, he reports.

But is it worth it? I guess that depends on resources, but if the option is within reach it certainly seems so.

“We actually ended up with some amazing content last year and it’s because our participants ranged from all over the world. We ended up with several people from India who presented. We ended up with lots of local people. We ended up with people from all over the United States, Canada, all over Europe.”

Attendee engagement. One of the most ubiquitous challenges of hybrid events. What can you share with us about how you have addressed this? 

When we talk about hybrid challenges, many of us might imagine trying to solve problems that help restore engagement to where it was previously. Matthew’s insights really illustrate how important it is to consider our definitions of “inclusiveness”.

“I would not have been able to attend. I can’t attend events. I am really glad you did this.”
– Resident of Alaska

With people pouring in from all over the world, it’s not just a matter of keeping the doors open for those who live in and around Colorado. Hybrid events provide an opportunity for inclusiveness that can’t otherwise exist. With 12 attendees who were in India including two speakers, the question Matthew and many other organizers seem to be asking is not should our events continue to be hybrid or remote. It’s how do we keep improving the hybrid model, and what other advantages exist?