Uploading a Video to Internet Archive

I make no secret of the fact that OSCON is one of my favorite conferences. I try to speak at it whenever I can and do what I can to support its community. When I do get the opportunity to speak there, it's one of the most seamless experiences a tech conference speaker could ever hope to have.

Part of that seamlessness includes a speaker agreement which very clearly sets forth the expectations and responsibilities both of the presenter and of O'Reilly Media. My favorite part of that agreement is this clause:

ORM speaker agreement

What that says is that while I agree that O'Reilly Media has the right to sell its own access to the video of my presentation, I retain the right to distribute it for free as I see fit. This is a marvelous clause. It shows a deep respect for the speakers and their investment of time, effort, and sometimes financial expense to create and present the content at the conference. It's great that O'Reilly Media allows the speakers this freedom to distribute their content and is in line with the free/open/libre principles with which O'Reilly has become associated over their many years.

I typically download the videos of my OSCON presentations and make them available on (where else?) Internet Archive. This post will detail how other speakers can do the same. Most of this process will work for most any video to which you have the rights. That last piece is key. Do not upload videos of other speakers' presentations unless they have given you permission to do so.

Download your video

Naturally, before you can upload your OSCON video to Internet Archive you must first download it from O'Reilly. Duh.

One of the perks of being an OSCON speaker is access to the complete vault of that year's OSCON videos. This is a treasure trove of information presented by world-class speakers and technologists. If you haven't checked it out yet, I highly recommend setting aside a few hours to lose yourself in it and fill your brain.

All of these videos are available for download for subscribers to the vault and are entirely DRM-free, just like O'Reilly's books. Feel free to download all of them for your offline use, but please do the right thing and do not distribute the ones to which you have no distribution rights.

To download your video, log into your O'Reilly account and navigate to your library. The OSCON video vaults to which you have access will be listed there.

If you have an ad-blocker enabled in your browser, you may need to disable it for the next step; otherwise the necessary UI elements won't appear.

When viewing the vault for that year's OSCON, the videos are organized by track. If you don't remember which track your talk was in (I never remember this), you can just pop open all of the track accordions and use your browser's search function to find your name in the list.

To the right of your talk is a nice little download button. Click that, wait for the entire file to come down, and voila! You have a video. Go ahead and turn that ad-blocker back on now.

Create an Internet Archive account

While, sure, you could upload your video to YouTube, Vimeo, or some other free streaming service, in the spirit of open/free/libre and open access I always upload mine to Internet Archive. I used to work at the Archive, so I confess to no small amount of bias here. But who among us can argue with a mission of Universal Access to Human Knowledge? If you upload your video to the Archive you're guaranteed that it will be free (in all senses of the word) and accessible in perpetuity.

Internet Archive is an accredited library, so to upload to it you need a patron account. Naturally, as with any good library, patron accounts are freely available.

To create or access your patron account, visit the Archive and click the Sign In link. From here you can either sign in with your existing patron account credentials or create a new one.

Upload your video

While it's possible to use an API, a script, or a Python library to upload your file, today I'll describe how to do it using the Internet Archive upload tool.
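For those who prefer the scripted route, here's a minimal sketch using the `internetarchive` Python library (install with `pip install internetarchive`, then run the `ia configure` command once to store your patron account credentials). The identifier, filename, and metadata values below are hypothetical placeholders, so treat this as a sketch rather than a recipe:

```python
def build_metadata(title, year, description):
    """Assemble the metadata fields the upload form asks for."""
    return {
        "title": title,
        "mediatype": "movies",               # videos use the "movies" mediatype
        "collection": "opensource_movies",   # the open community video collection
        "date": str(year),
        "description": description,
    }


def upload_talk(identifier, filename, metadata):
    """Upload a file to the given (globally unique) Archive identifier.

    Requires `pip install internetarchive` and credentials stored
    beforehand via the `ia configure` command.
    """
    from internetarchive import upload  # deferred: third-party dependency
    return upload(identifier, files=[filename], metadata=metadata)


# Hypothetical example usage:
#   md = build_metadata("My OSCON Talk", 2015, "Recording of my OSCON talk.")
#   upload_talk("oscon-2015-my-talk", "my-oscon-talk.mp4", md)
```

The identifier must be unique across all of Internet Archive, so it's worth checking that your chosen one isn't already taken before uploading.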

Click the Upload icon on the Archive front page:

pic of the front page

Click the “Upload files” button on the tool and then drag your file onto it:

pic of the drop here

After doing a bit of parsing on the filename, the tool displays some metadata fields. Many of these are required to proceed:

pic of the metadata form

Enter the appropriate information then click Upload to start the process. The tool presents a progress bar while it does its work.

pic of the progress bar

After the file upload is complete, the display changes to the Internet Archive page for your file:

pic of the item

If you look at the box on the right side of the page you can see that the file is there and available for access. However, it won't be available for online viewing, nor can changes be made to this page, until the file receives further processing. Once that processing is complete, the video will be viewable online:

pic of derived item

Voila!

That's all there is to it! Your video is now available for sharing. The Archive will preserve it and make it freely available in perpetuity. At this point you can click the Edit link next to the title and add or change both the metadata and the files in your Internet Archive item.

How NOT to Hire an Entirely Remote Workforce

Recently there was an article in the Harvard Business Review about how a particular company hired a 100% remote workforce.

I highly recommend hiring a remote workforce. I have a lot of experience helping companies do so and even present talks on the subject. I wholeheartedly agree with the article’s author: there are incredible benefits to supporting a remote workforce.

That said, the Clevertech hiring process as set forth in the article is not only a bad way to interview for and hire remote workers, it’s a bad way to interview for and hire ANY workers. That such poor advice was provided in a publication as well-respected as the HBR is disappointing, to say the least.

From my nearly 20 years of experience in this industry, I cannot see anything in the Clevertech process which can lead to hiring and building more effective remote teams. I would not recommend these processes for any team, and especially not for those which are distributed by default. Instead, the processes set forth in the HBR article would lead to expanding the echo chamber of an organization and blocking the hiring of those with new and potentially challenging viewpoints.

This post will help to clarify why the Clevertech process is not one which companies should follow, either when hiring on-site or remote employees. It is a master-class in how not to run the hiring at your organization, unless, of course, you are looking to minimize the diversity within your company.

Let’s start where the article starts: the job description. The Clevertech approach is to make the job description as vague as possible in order to entice curious onlookers to ask for more information. The claim appears to be that this somehow attracts employees who will thrive in a telecommuting environment, though it’s not expressed how vagueness of job posting aids in this.

Making a job posting at all vague is arrogant and disrespectful to candidates. You, as a company, are starting your relationship with your potential employees by playing games when instead you should be respecting them as professionals and providing them the data required to make an informed decision.

For those who wish to telecommute, expressing clearly and unambiguously that the position allows it is usually enough enticement for them to continue reading, if not also to apply. Not providing additional details, however, is going to lead to a large number of unqualified applicants and a commensurately large amount of staff time spent filtering out and declining these applicants.

Wasting the time of staff and candidates is unfortunate, but it’s not as problematic as the exclusionary nature of a vague job posting. Studies show that women only apply for jobs if they feel they meet 100% of the requirements.

If no requirements are listed at all? It’s unlikely that many women will apply for the Clevertech positions. Vague job postings aren’t only exclusionary to women, but also to people of any gender expression who do not think exactly like everyone else in Clevertech. This leads to a lack of diversity of ideas as well as of genders.

The article then continues by extolling the virtues of a “Log in with Google” call to action. The reasoning provided is that “If someone doesn’t have a Google account and isn’t willing or able to set one up, that person probably isn’t advanced or flexible enough to work remotely and positively impact our company.”

Again, the article does not explain why the willingness to have a Google account is in any way generally advantageous for remote workers. Without a valid explanation of that, this particular data point is invalid as to the topic at hand: advising how best to hire telecommuting employees.

This is another example of how the Clevertech hiring process is exclusionist. There are a great many reasons why a person may not be willing to have an account on a particular service. For instance, I know many privacy advocates who have either avoided signing up for a Google account or who have closed the one(s) they had open.

If Clevertech is using the “Log in with Google” as a way to filter out people who will not use Google on principle, that’s their choice to make. However, stating that the reason for the requirement is that it leads to candidates who are better qualified to telecommute is both disingenuous and unproven. There is no correlation–let alone causation–between the two.

But that’s OK, because the article continues, “If candidates are put off by our unorthodox approach, we know immediately that they are not a good fit for our firm.”

In tech we are more and more often supplied proof that the phrase “not a good culture fit” is code (subconsciously, perhaps) for “not exactly like us” and a sign that a company may not support a diversity of thought or of hiring.

If I may be allowed for a moment to judge purely by appearances, Clevertech does, indeed, appear to fit into that camp. 94% of its workforce is technical in nature (IT or development). 4.8% of its workforce is female. 0% of its technical workforce is female. Zero percent.

There is not enough data from this one article to show that hiring processes of this sort lead to a lack of gender diversity at Clevertech or any other company which uses them, but there is certainly enough data to cast suspicion upon such practices and processes.

Another arrow in Clevertech’s remote hiring quiver is a “badge” system, wherein employees get to bestow points upon each other every month for embodying the company’s core values. Again, it’s unclear how this helps when hiring remote employees. It’s even more unclear how this helps within the company itself.

Just a tip, Clevertech: If you have a numbers-based gamification system in your company, the engineers WILL game that system. When your organization is 94% technical staff, either the system is already being gamed or the engineers don’t care enough about the system to bother gaming it. If 94% of your staff do not care about a policy, drop it and take the time to find a substantial method for appreciating the contributions people make to the company, its culture, and its values.

Clevertech’s interview process relies upon the candidates being willing to video themselves answering supplied questions. The reasoning behind this is that it shows the interviewers how the candidate reacts under pressure. If you’ve ever been in a software development department when there’s a crisis situation, you know that how one is able to perform on camera is nowhere near the top of necessary criteria for resolving the issue at hand.

All interview processes for all positions, remote or otherwise, should relate 100% to the position for which the person has applied. For a newscaster, performing on camera is a vital part of the job and therefore asking them to create a video is a valid part of the interview process. For an engineer or IT professional, it is not. Asking engineers to make videos to supply answers to questions is quite silly and a waste of everyone’s time. It will not provide much information which is relevant to the position and the problems it will face. It will, however, turn off candidates who are not extroverted, who are not comfortable in front of a camera, who are afraid of being discriminated against because they are older or are fat or are female or wear religious artefacts or are disabled or are not precisely like the rest of the team they had applied to join. Requiring candidates to send videos is a gating question. It allows the company to filter out those who are not like them. Who are not “a good culture fit.”

The article states that the video questions asked of the candidates help to filter out those who are “…put off by the intensity of the questions…” as well as to attract “…applicants who respect the high level of our questions…”. Setting aside for the moment that neither of the example questions provided was either intense or high level, it’s worth considering that these particular questions and the method in which they are presented and evaluated are, once again, not optimized for the hiring of remote workers. What they ARE optimized for, in fact, is filtering out those candidates who are not 100% in sync with the existing opinions and norms of the organization. Once again, these are questions which lead to groupthink and a dangerous lack of diversity both of people and of opinions.

As you can see, there are no hiring or interview tips raised in the article which are in any way connected with whether a candidate will be a good telecommuting employee. Instead, almost every tip is one which can lead to a lack of diversity of genders, backgrounds, and opinions in a company. A truly innovative company values these differing views and strives both to maximize them in their workforce and to leverage them for the flashes of brilliance which they bring to otherwise mundane situations. Hiring practices such as those expressed in this article are a bad business practice and bad for the bottom line. Please do not use them.

For resources on how to hire and work with remote teams, check out this curated list of resources.

Ops/DevOps Learning Resources

Last night I was speaking with a junior engineer who’s entering the job market. She’s had no trouble getting development internships (she has four of them under her belt right now) but has been having a hard time finding ways to learn about the area which most appeals to her: Ops and Infrastructure. That led me to post the following tweet:

Ops-school Tweet

Desperately Seeking Ops Schools

The hope was that this would turn up some online resources where a person could start to learn the arcane mysteries of ops and devops, but it didn’t really work out as well as I was hoping. While people did provide some resources (more on that below), overall it appears there aren’t nearly as many structured learning opportunities for ops as there are for programming.

And that got me wondering… just how are people learning this stuff? Ops and infrastructure is a tricky and complicated topic. It’s the foundation on which the entire internet is built. Thousands of people around the world spend their days setting up, scaling, monitoring, and maintaining infrastructures large and small. But how did they learn how to do that?

I acquired my own far-from-comprehensive ops knowledge in an ad hoc manner: something would break, and I’d virtually or actually tag along as it was fixed by someone more knowledgeable. That was a perfectly cromulent way to pick up some basic knowledge, but it doesn’t exactly scale to professional levels. And it certainly wasn’t actionable advice to give to a young engineer. What to do?

Well, crowdsource it, of course.

I’ve created a new repository on GitHub: devops-learning-resources. It currently contains all of the resources which people shared in reply to my tweet, along with a few others which came to mind. Do you know of additional resources? Pull requests gratefully accepted!

Let’s do this thing, people. Let’s make it easier for aspiring ops people to find the resources they need to support our infrastructure for years to come.

Oh, and if anyone is interested in hiring an ambitious and eager junior devops person, let me know and I’ll put you in touch with her.

Open Source Leadership Succession Plan?

I present at a lot of FOSS conferences and therefore have the chance to meet and speak with a lot of FOSS luminaries. These are inspiring people who’ve been working with, for, and on FOSS since the very beginning of the movement and who are still playing absolutely vital roles in FOSS at a leadership level. These are the people we all consult when forming a new foundation, creating a new license, or open sourcing an internal project. Most of the individuals who are working at these conceptual and policy levels of FOSS have been doing it since the beginning and helped to craft the history, the law, the processes, the politics of Free and Open Source software. It will be difficult to replicate that experience and knowledge.

But here’s the thing: We are, each one of us, getting older.

Some day the Tim O’Reillys, the Danese Coopers, the Simon Phippses, the Allison Randals, the Karl Fogels, the Bradley Kuhns, the other luminaries of the FOSS world will want to move on and/or retire. And well they should, as they’ll have more than earned a break for all the service they’ve given FOSS.

As I look around the ranks of FOSS policy leadership, I see all these great people but I see few to no younger leaders. These people have been serving us so well for so long that perhaps we’ve just had no need to supplement them with additional assistance and, in truth, it would be difficult to do so. Which I believe is precisely why we need to start thinking about this now before it’s too late.

So I have to wonder: do we in FOSS have a succession plan for these luminaries upon whom we’ve learned to rely? Are there programs and initiatives for training and mentoring the next generation of FOSS policy leaders? There are plenty of people working to build up the community leaders of tomorrow, but are we devoting enough attention to the policy and legal side of things?

Perhaps we are. I pay a lot of attention to what happens at that level of FOSS but won’t pretend to know everything which is going on. Mostly I just wanted to pose the question to see what thoughts and insights people have about the matter.

Adding a Slack network to your znc IRC bouncer

Slack is taking the online world by storm. Most everyone I know in technology is a member of at least one Slack team, and it’s not rare to find people who are members of ten or more Slacks.

My personal opinion of Slack is indifference. It’s pretty much just IRC with a fancy new paint job. I love IRC, so that element of the Slack service appeals to me greatly. However…that paint job…no. There’s just too much going on there. Too many gifs. Too many emoji. It’s very distracting.

Add to that distraction the burden of having yet another damn chat client on my desktop. IRC, Hangouts, Skype, iMessage, Twitter… It’s just too much.

Thankfully, Slack offers an IRC gateway, allowing you to escape that loud and distracting paint job while you also reduce the number of chat clients you need open. You can either connect to Slack directly using your IRC client of choice or–if you want scrollback from when you’re offline–you can set it up on your IRC bouncer. The latter is the option I pursued. As it was rather a pain in the ass, I’ve captured the steps below so you don’t have to suffer the same headaches I did.

CAVEAT: I use znc as my bouncer (installed on a Digital Ocean droplet), the webadmin interface to znc for all admin tasks, and Textual on OS X for my IRC client. All information below is presented in the context of this software.

Setup on the znc side

  1. If they haven’t done so yet, please ask your Slack administrator/team owner to enable the IRC gateway.
  2. Once that’s done, access your account gateway page. It’s available at http://$TEAMNAME.slack.com/account/gateways. You’ll need this information for the next steps.
  3. Log into the znc web admin interface, edit the user which will be connecting to Slack, then click “Add” in the Network box. The full click path for this is Manage Users > Edit (next to selected user) > Add (in Network box).
  4. Most fields on the Add Network form are what you’d expect, but there are a couple of potential snags…
    • The network name must be alphanumeric and cannot contain any spaces. If you receive the following error, check the network name for invalid characters: "Invalid network name. It should be alphanumeric. Not to be confused with server name."
    • The server address should be entered in the following format:
      $TEAMNAME.irc.slack.com +6667 $PASSWORD_FROM_GATEWAY_PAGE
      The + is important because it denotes that SSL should be used. Slack doesn’t allow unencrypted connections.
  5. That should be it. Save the form, then move on to setting up your IRC client.
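For the curious, the saved network ends up looking something like the fragment below in znc.conf. The user name here is a placeholder, and the team name and password reuse the gateway-page values from above; since znc rewrites this file itself, make changes through the webadmin rather than by editing it directly:

```
<User alice>
        <Network slack>
                Server $TEAMNAME.irc.slack.com +6667 $PASSWORD_FROM_GATEWAY_PAGE
        </Network>
</User>
```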

Setup on the Textual (v5.0) side

  1. Open the interface for adding a new server: Server > Add Server
  2. Name the connection whatever makes most sense for you.
  3. Enter the address of your znc bouncer in the Server Address field.
  4. Check the “Connect Securely” box since, again, Slack doesn’t allow unencrypted connections.
  5. Enter your znc user password in the “Server Password” field.
  6. Select the “Identity” option (in the left side-bar in Textual v5.0).
  7. Enter your Slack nickname in the “Nickname” field.
  8. For “Username”, enter your znc username, a slash, then the Network Name you selected when adding the new network in znc: $USERNAME/$NETWORK_NAME
  9. For “Personal Password,” enter the password from the Slack account gateway page.
  10. Save the new server to the client.

Theoretically, you can now connect. The client will automatically find all of the channels to which you were already subscribed on Slack. You may need to tweak a few more options to make it suit your tastes, but the hard part is now done.

OSCON Portland, 2015

Oh, my. My last OSCON trip report is only two posts back. It seems I’ve been rather a slacker on the blogging front. I’ll see what I can do to change that. In the meantime, here’s my OSCON Portland, 2015 trip report.

This was my first OSCON as a Portland resident, which was as lovely as it was exhausting (both: very). When you know as many people as I do and live in the city hosting OSCON, your conference lasts multiple weeks as people arrive and depart town. Next year the conference moves to Austin, so I’m grateful I had at least one opportunity to experience a hometown OSCON.

I only had one talk at the conference this year, and that one not scheduled until the final slot of the final day of the conference. It was well-received and surprisingly well attended, considering the other amazing speakers also in that timeslot. This was a talk which my co-presenter and I have given before and which required only minimal edits, which you would think means that I’d have plenty of free time to attend sessions. Unfortunately, that was not the case. For me, this OSCON was filled with meetings and greetings and hobnobbing and discussions. All were with great people and were productive, but it did impinge upon my session attendance.

However, it wasn’t all meetings and I did get the opportunity to see many really great speakers:

  • Presentation Ninjitsu, presented (as wonderfully as you’d expect) by Damian Conway
  • Rolling dice alone: Board games with remote friends, presented by Tim Nugent. Not only was it incredibly entertaining, Tim also did a great job teaching the audience about the philosophy and psychology of games. It also made me wonder whether it’s possible to apply some of his ideas and research on remote gaming to managing remote teams. That could be an interesting talk.
  • How Do I Game Design? Design games, understand people!, presented by Paris Buttfield-Addison, Jon Manning, and Tim Nugent. This talk expanded upon some of the fascinating philosophy and psychology upon which Tim’s touched the day before. Unfortunately I was only able to see half of this session, so I’m very eager to finish watching it when the videos come out.
  • Test Driven Repair, presented by Chris Neugebauer. Chris did a great job debunking the myth that you can’t do TDD on legacy projects. His approach was as useful as it is logical, and his “even one test is better than no tests at all” makes the approach accessible as well. This is one video most every team should watch once it becomes available.
  • Open sourcing anti-harassment tools, presented by Randi Harper. A somewhat controversial session (more on that in a moment) about the tech required to help people avoid online harassment. Both the session and the questions were almost entirely about architecture and technology. It was inspiring to see what Randi has been able to do with a few lines of Perl code, despite the immense burdens inflicted by her own online harassers.
  • As well, I caught every keynote, the videos for which are already online.

And then there were the talks I sorely regret having to miss:

Overall the conference was amazing, yet there was a dark cloud over the final couple days of the event. Some people rich in opinions but poor in manners took umbrage at OSCON accepting Randi Harper as a speaker. These people flooded every O’Reilly Media inbox and phone line they could find with demands that Randi be dropped from the schedule. When that didn’t work, they stamped their collective little princess foot and spammed the #oscon Twitter hashtag with their complaints, making it entirely unusable.

Many of us speakers were perturbed by being unable to use the hashtag, so we suggested to O’Reilly that it install Randi’s own project to help improve the signal-to-noise ratio. The organizers of the conference considered their options and found this one to be best, then asked Josh Simmons–the community manager for OSCON–to install the project, but only for the duration of the conference. WE SPEAKERS suggested this, O’REILLY ORGANIZERS agreed to it, yet JOSH SIMMONS became the target for abuse and harassment. He handled it remarkably well, which is a testament not only to his strength but also to his devotion to the community he manages, a community which supported him in return both in his actions and in his need.

I confess, I have largely ignored the movement which caused all of this mayhem and have largely remained agnostic as to their controversy of choice (they simply have not been worth my time). But now that I have seen their methods first-hand, I have formed an opinion and it is a strong one. Pro tip, kids: If you want to win friends and influence people, don’t attack the innocent lest we all see you for the cruel bullies that you are.

Aside from that, though, this was by far my favorite OSCON I’ve yet attended. The subjects were engrossing, the speakers were world-class, the people were kind, inspiring, thoughtful, and hilarious (often all at the same time). Before this OSCON I was on the fence about whether to head to Austin next year. Afterward, I immediately booked my hotel for 2016. Hopefully I’ll see you there!

Badass: Making Teams Awesome

I read Kathy Sierra’s BADASS: Making Users Awesome back in February and haven’t been able to get it out of my mind since. The premise of the book rang true in a way I’ve not experienced from a book for a very long time. Reading it leads to the sort of, “well, DUH” moment which only follows when you come across an idea so brilliant and genius that it seems–in retrospect–so obvious.

Judging from the reviews on Amazon, O’Reilly, and elsewhere on the net, I’m in very good company with appreciating the book and the value it provides. Thank you, Kathy, for this great tool you’ve given us.

Whenever we read a book, we do it from our own unique point of view. I’m in tech management, so most things that I read are viewed through a managerial filter. This book is no different, which is why it has stuck with me so tenaciously over the past few months. Read with this perspective, BADASS is one of the most insightful management books I’ve had the pleasure to experience.

“But wait!” you protest, “This is a book about user experience! About product management! What do you mean it’s an amazing management book? You, my dear, need to smoke more mad crack.”

To that I reply, “You have an adorably limited definition of ‘user’ and ‘product’.”

Simply speaking, a product is anything which you produce. Unless you’re an assembly line (in which case: my condolences), you produce things through skill and craftsmanship. As management, it is my job to help produce effective teams. It is a job which I take very seriously. It is not easy and it requires a lot of knowledge, experience, and time to do it properly, as does any craft, but the end result is always worth the effort.

As for user, there’s an old chestnut which says that only drug dealers and software developers call their customers “users.” But, that aside, a user is anyone who avails themself of or benefits from your product. Very loosely speaking, from a management point of view a user is someone who benefits from the team you’ve built, including (and especially) the members of the team itself.

Within this context, then, many of the concepts from BADASS are highly applicable to building strong, effective, and cohesive teams.

For instance, on the topic of performance:

Technical definition of badass: Given a representative task in the domain, a badass performs in a superior way, more reliably.

If performance can’t be evaluated in some way, we can’t help someone build it.

On coaching:

The difference between extrinsically (external) vs. intrinsically motivated experiences is the difference between short term and sustained motivation.

They [the users] don’t want to be badass at our thing. They want to be badass at what they can do with it. They want badass results.

In the perfect scenario, we give our users as many options as they could want or need, but we also give them trusted defaults, presets, and recommendations. Especially in the beginning, we make decisions so our users don’t have to. Be the expert, the mentor, the guide.

On productivity:

Make sure your users spend their scarce, easily drained cognitive resources on the right things.

On success:

The key attributes of sustained success don’t live in the product. The key attributes live in the user.

When you’re more skilled at something, it’s as though a part of your world got an upgrade. It’s as though pre-badass-you had been experiencing the world in Standard and now a part of the world has become High Resolution.

And on YOU:

They don’t need you to be perfect. They need you to be honest.

These are only a few examples of unexpected nuggets of managerial wisdom in this book. In fact, most of the ideas espoused in the book are applicable to many different walks of life. It really is a remarkable piece of work and one I recommend to anyone who wants to help make life easier and better for those around them.

OSCON 2014

Another year, another OSCON. Now that I’ve nearly caught up, here’s a quick recap of that busy week.

It was a busy OSCON for me this time around. I had two talks to give, only one of which I’d finished writing before I arrived. Oops. Ah, well. They both went well and were well-received and -reviewed, so I’m a happy camper. The slides and my webcam videos of both talks are available on Internet Archive:

Because my current job is so variable, my session attendance was equally variable. I didn’t really focus on any specific things, instead giving preference to talks by people I know and respect.

And, of course, there was the evening of Perl goodness: rjbs with a whole hurricane of lightning talks (in lieu of the State of the Onion, since Larry’s recovering from eye surgery), then a dozen or so individual lightning talks. As usual, rgeoffrey ran a tight ship and kept the talks moving smoothly.

Really, though, most of my time was spent sitting or standing around and talking to people. So many people. So, so many people. I’m not even going to attempt to list them all, but it was all time well-spent. I’m grateful for the opportunity to meet and befriend so many amazing and inspiring people.

One thing that did not happen that week was productivity. Beyond talking (both on stage and off), I accomplished little of substance. Not only does that make this subsequent week more difficult, it’s also an opportunity missed. I could have done some excellent collaboration that week. So when I read rjbs’s OSCON trip report I was intrigued by his idea to set up shop at a table & just get shit done next year. I’d back that play. I have plenty of projects which need love and attention, so having a personal OSCON Hackathon wouldn’t go amiss. Thanks for the idea, Rik!

Overall it was a great trip, but an exhausting one. I’m honored that I was once again invited to speak and overjoyed to have spent so much time with so many great friends, both old & new. You’ve filled my head with knowledge and ideas and wowed me with your accomplishments. Now I just need to figure out how best to apply all my newfound inspiration.

San Francisco Perl Mongers: 12 months, 50% growth

A Timeline

On June 21st, 2013, Fred Moyer asked whether I’d like to discuss becoming a co-organizer for San Francisco Perl Mongers. On July 5th it was made official. Earlier this year I was promoted to primary organizer, with Fred stepping aside to focus on some real life matters (though he still very much loves and is involved with SF.pm).

SF.pm: One Year In

Therefore this marks, more or less, my one year anniversary of SF.pm organizing. That seems as good a reason as any for a recap. So, what’s happened in the past year?

  • We’ve held ten events.
  • We’ve been honored to host 16 different speakers (thank you, Lightning Talks, for bumping that number 😉 ).
  • We’ve added five new sponsors. (though we’re always on the lookout for more!)
  • We’ve started recording all events and making them available in our SF.pm collection on Internet Archive. (a post about how we do this is in the pipeline)
  • We’ve added 208 members, going from 394 members to 602.

That one bears repeating: San Francisco Perl Mongers has increased its membership by over 200 people in a single year.

What gives? How’d we do it?

First of all, let me be very clear: I don’t believe for a moment that these are 602 engaged members. Many are lurkers. But they’re lurkers who took the initiative to sign up and who receive our messages about Perl and its community. That’s 208 more people seeing those messages than before, which—engaged or not—is a win in my book.

Also, another thing to get clear: I did not do this alone. While I am now the primary organizer of SF.pm I am by no means the only organizer. Fred, Joe Brenner, and Jeff Thalhammer deserve equal share in the credit.

Now, how’d we pull off this feat? As you’d expect, it was a multi-faceted approach:

  • Flexible scheduling. After Fred asked me to lend a hand, I started meeting with some local Mongers to get some feedback on where SF.pm has been and where they’d like to see it go. A lot of them said they were no longer attending because there were too many other meetups which landed on the usual SF.pm meeting night. So I scrapped the set “last Tuesday of the month” date in favor of a monthly event which would float to wherever it worked best that month. This allowed for a more diverse pool of potential attendees. Rather than just seeing the same faces each time, we now were seeing people who hadn’t been able to attend either ever or for several months at a time. As well, having a flexible meeting date made it easier to mesh our schedule with that of potential speakers.
  • Diverse content. How many of you work with Perl and only Perl, no other technology? No JavaScript, no ops, no continuous integration framework, just Perl? Bloody well none of you, I’d wager. So why was our SF.pm content 100% focused on Perl? We’ve changed that. We still feature Perl in some way in almost every event, but often the primary focus of an event has been expanded to “of interest to the SF.pm community.” Some of our most popular events in the past year have been about Data Science, MongoDB, and Docker.
  • Cooperation and collaboration with other communities. Our content is great, our community is amazing. Why should we keep these things to ourselves when others can benefit? Therefore in the past year we’ve been cross-posting many of our events with several other local tech community user groups. We’re Perl, so there’s more than one way to do it. That includes choice of language, so we’ve been thrilled to welcome new members coming in from the local Ruby and Python communities. The additional perspectives help enhance the experience for everyone and we’re very grateful for it. Special kudos go out to SF Ruby, who’ve been particularly welcoming of messages coming in from an external group. The SF Ruby gang really groks that we’re all stronger together than apart and that great learning opportunities can come from anywhere.

Someone else who groks this: John Anderson, aka genehack. I was really thrilled, when watching his YAPC::NA 2014 keynote, to hear him espousing many of the same steps which we at SF.pm were already taking. If you watched that talk and thought he was smoking mad crack, I’m here to tell ya: OK, maybe he was, but his suggestions work and we’re proof of it. Thank you, John. You’re my kind of crazy.

SF.pm: The Future

This post is already taking longer to write than I’d hoped, so I’ll try to wrap it up quickly. What’s next for SF.pm? What will the next year look like?

Right now there are no official plans, but here are some of the things I have rolling around in my head:

  • Update the website. The SF.pm website is…yeah. It’s dated. The only thing standing between us and a nice, clean, Bootstrap-y site on GitHub Pages is me carving out half a day to futz with the thing. It needs to happen, and it’s firmly on my radar. Perhaps I’ll stockpile a lot of round tuits at OSCON this year and use them for this purpose. 🙂
  • Engage more of the membership. We have 602 members, but they don’t really communicate that much. I’d love to get them talking a bit more, both among themselves as well as in front of the group. 602 people represents a vast amount of knowledge and I’d love to tap it so they can share their experiences with everyone.
  • Develop some sort of newbie program. Back in January 2013 I griped that SF.pm (and Perl in general) needs better outreach for newbies. I still stand by that statement. Another way I’d like to engage that burgeoning membership is to get their assistance to develop some sort of program to introduce more people to programming in general and Perl in particular. This is definitely a place where I won’t be able to go it alone.
  • Strengthen and increase collaboration with other communities. That assistance for knowledge sharing and new programmer development doesn’t necessarily have to come from our membership exclusively. When learning the fundamentals of programming (loops, functions, MVC, etc.), it doesn’t really matter which language you use. The concepts are easily applied anywhere. As well, other communities have a lot more experience organizing things like hackathons and workshops than I do. I’d love to collaborate with them to help the entire SF tech community expand their horizons.

Those are a few of the ideas I’m having. Maybe they’ll happen. Maybe they won’t but others will. Hey, that’s cool. All I know is that thanks to its amazing members SF.pm will continue to be a strong and growing community for years to come.

Moving On From The Errors of GitHub and The Ada Initiative

The Story So Far

  • On March 15th of this year, Julie Horvath–a well-known and -respected advocate for gender equality in technology–left her job at GitHub. It caused a bit of an uproar among technologists on the internet.
  • On March 16th, GitHub issued a statement that they were investigating the allegations and that the accused parties were placed on leave or banned from the company’s offices.
  • On April 21st, GitHub issued a statement of findings. This one declared that the investigation was complete and that it had found no evidence of wrongdoing (though it did find mistakes and errors in judgment), but that the accused parties had left the company regardless.
  • On that same day, Julie Horvath responded to the announcement.
  • On April 23rd, The Ada Initiative issued a statement that they were severing all ties with GitHub as a result of the situation.

Before I dive into the rest of this post, there is one fact which I would like to make crystal clear:

A hostile environment–work or otherwise–is NEVER OK. If you are in an environment which makes you uncomfortable, please immediately do whatever is necessary and legal to get yourself to a safe place. If you witness a hostile environment, speak up. This is everyone’s problem and we all have a responsibility to make sure our friends, families, colleagues, and fellow humans are safe.

Therefore I fully support Julie’s decision to quit GitHub. She states that–for many reasons–it was a hostile environment for her. Staying was not an option. She had the awareness to recognize this and the courage to walk away.

This is not an article about Julie. This is an article about what happened after she left.

Shit Happens: GitHub Edition

When GitHub released its statement of findings on April 21st, the internet lost its shit (as it is wont to do), embroiling itself in its usual under-informed tea leaf reading. Some people expressed a desire to forsake the company’s services. Many more decried the company for its “non-answer.”

I confess that I, myself, am dissatisfied with GitHub’s statement of “findings.” In my opinion, it is a legalistic and opaque answer to the questions its community wants and needs resolved in order to mend damaged trust. This is counter to the culture of openness which GitHub has fostered. It has shared none of the details of the investigation, instead asking an already skittish and suspicious community to just take it at its word. This was a clumsy (but perhaps necessary) move on their part. GitHub either did not know or entirely disregarded how this sort of statement was going to further damage their reputation.

Shit Happens: The Ada Initiative Edition

One of the most visible and potentially damaging reactions to GitHub’s statement of findings came from The Ada Initiative, when they publicly denounced GitHub and dissolved their partnership with the company.

While I respect the mission of The Ada Initiative and believe they have nothing but the best intentions at heart, if GitHub’s statement of findings was clumsy then Ada Initiative’s reaction to it was a pratfall. Rather than taking this opportunity to further its mission by assisting a company struggling with turmoil induced by alleged gender-insensitivity, Ada Initiative instead chose an emotional and reactionary path which removes repository access for underserved and at-risk individuals.

The Ada Initiative statement declares that “The sum of these events make it impossible for Ada Initiative to partner with GitHub at this time,” but it does not actually detail the umbrage which it takes against “these events.” The entire judgment call is left as an exercise for the reader, under, one must imagine, the false assumption that everyone who reads the statement both now and in the future will grok the wrongs performed here.

As well, Ada Initiative canceled their partnership with GitHub but told us neither what steps they took to correct the offenses prior to the cancellation, nor what they request of GitHub to mend the wounds. All we are told is “We will not accept future sponsorships from or partnerships with GitHub unless the situation changes significantly.” This is hardly a constructive or actionable statement.

While The Ada Initiative is definitely taking a strong stance here, it is doing so by causing harm to the community it is sworn to protect and uplift. Rather than assisting a company to learn how to make a safe workplace, it has turned its back on it. The Ada Initiative is willfully ignoring an opportunity to make a positive difference.

Moving Forward

There are two things which ought to happen for everyone to move forward and make something good and productive out of this otherwise ugly situation:

GitHub needs to be more open.

I suspect that the GitHub statement of findings was as legalistic and opaque as it was because there may be pending or potential legal proceedings. This would tie their hands as far as what they are allowed or advised to say by their counsel.

However, if they wish to mend their increasingly damaged reputation, they still should make an effort to be more open about what has happened and is happening with this issue. They should say as much as possible, and then also tell their community what they cannot discuss and why. Stop the weaseling, start the openness.

Part of this openness must be a discussion of what they are doing to make GitHub into a safe environment for all members of the company, acknowledging that past efforts–however well-intentioned–may have failed and detailing how they hope to change the process and the culture.

The Ada Initiative needs to work WITH GitHub, not against it.

I recognize that Ada Initiative is trying to protect the community and I respect that. However, they should consider making a statement of concern but holding off on their cancellation of partnership with GitHub. Instead, they should consider reaching out to GitHub with a cooperative plan to help improve gender/minority sensitivity in the company. That would be much more in line with the mission of the organization and more productive than walking away in a huff without, from the available evidence, trying to work things out with GitHub first.

I leave you with this brilliant and insightful tweet from Nicole Sullivan:

Let’s all please keep that in mind rather than immediately jumping to the worst possible reactions. These organizations are trying to do the right thing. They’re just making mistakes along the way. It happens. Let’s make those mistakes productive rather than cutting the organizations down for making them.