Greg.Brim.Net

I'm gholt on GitHub and I've worked on various projects, most notably OpenStack Swift. Or maybe that's Swauth, oh, or Swiftly. Well, they're all Swift-related anyway. Though most of that work is in Python, lately I've been working a lot in Go, aka Golang.

I'm also a Principal Engineer at Rackspace, though my opinions here should not be attributed to the company itself.

I'm not currently looking for a new job, but here's my resume in HTML or PDF if you're interested.

I've been working on Swift for over five years now and I find it's time for a change. I'll still be lurking around from time to time, and definitely feel free to contact me if you need me to explain some weird thing I wrote that makes no sense anymore. I'll also be doing my best to maintain Swauth and Swiftly, and I still hope to write more posts on how to use Cloud Files / OpenStack Swift. But a change in focus will be good. Yes, I'm still at Rackspace.

File this under "Things you keep ignoring until it finally drives you mad; then you look it up and find out it was so damned simple the whole time."

xsel | xsel -b will copy the mouse-selection (primary) buffer to the Ctrl-V clipboard. Flip where the -b is to do the opposite.

I should read Glyph more often. I never got into Twisted; I always felt like I was installing an entire operating system when I tried to use it. And at the time, the parts I was interested in were barely maintained. Of course, that was years ago.

Anyway, this post isn't about that; it's about Glyph's decision to forgo comments sections on his site.

More...

Glyph's Unyielding post is a good read. As a heavy user of Eventlet, I completely get what he's talking about. In my heart I know "magic" concurrency is bad, but I still use it because I'm lazy. I'm not too proud to admit that. ;-) But once I finally make the move to Python 3 (definitely waiting for 3.4 and asyncio) I should force myself to quit being lazy and do concurrency correctly.
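
For the curious, here's a minimal Python 3.4-era sketch of what that explicit style looks like with asyncio, where every context switch is spelled out with yield from instead of hidden by monkeypatched green threads; the fetch coroutine and its names are just made up for illustration.

    import asyncio


    @asyncio.coroutine
    def fetch(name):
        # The yield from marks exactly where this coroutine can be suspended;
        # nothing switches out from under you anywhere else.
        yield from asyncio.sleep(1)
        return name


    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(asyncio.gather(fetch('a'), fetch('b'))))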

Unfortunately, it's gotten to the point where I'm lazy automatically. As my mom likes to say, I often put a lot of effort into being lazy when it would've been easier to do the right thing first. It's carrying over to other languages now. Take my recent foray into Go. There I went out of my way to hide the concurrency from the user of the code. I should go back and fix that. The problem is, I didn't even think about it when I did it. It's just automatic now. Automatically wrong.

If you're new to a project but really want to jump in and contribute, I'd recommend talking with those already working on the project first. Get to know them a little. Maybe ask what "busy work" is lying around that nobody has gotten to yet, where you might get your feet wet.

If your first introduction to the existing people on a project is a bunch of code submissions with simple formatting changes or spelling and grammar fixes, you're not going to make a good first impression. Even worse is if your first introduction is something like switching the order of parameters on an equals comparison or some other such nonsense.

On the other end of the spectrum, if your first introduction is a code submission refactoring a huge part of the core of the project, again, you're not going to make a good first impression.

Working on an open source project is as much about the people involved as it is about the code and what you (or I) think is perfect. Sure, arguments are going to occur, and often over trivial things. But at least get to know the folks before you poke at them.

I've continued on my quest to make a Swiftly clone in Go in order to learn Go. I've created an authentication service connector to handle communicating with the external authentication service.

More...

I'm trying, once again, to learn Go. This time I set a useful goal: make a basic Swiftly clone in Go. Who knows if I'll achieve it or how long it'll take me.

More...

Swiftly 2.02 was released today. The biggest new feature is an optional configuration file (finally). Source available, of course. Here is the full change log:

More...

When setting out to use Cloud Files, planning is important. You don't want to find out after you've uploaded a ton of data that you could've had a better layout. In this post I detail some considerations and best practices when using Cloud Files.

More...

Listing containers and objects is a bit involved due to the large amounts of data that may be present. In this post I show examples of how to make use of Cloud Files listings.
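
To give a rough idea of the marker-based paging involved, here's a hedged sketch using the requests library; the storage URL, token, and container name are placeholders you'd get from authentication, and 10,000 items is the usual per-request listing limit.

    import requests

    # Placeholders: a real storage URL and token come back from authentication.
    storage_url = 'https://storage.example.com/v1/AUTH_myaccount'
    token = 'my-auth-token'


    def list_objects(container):
        """Yield every object name in a container, paging with markers."""
        marker = ''
        while True:
            resp = requests.get(
                storage_url + '/' + container,
                params={'format': 'json', 'marker': marker, 'limit': 10000},
                headers={'X-Auth-Token': token})
            resp.raise_for_status()
            if resp.status_code == 204:
                break
            page = resp.json()
            if not page:
                break
            for item in page:
                yield item['name']
            marker = page[-1]['name']


    for name in list_objects('mycontainer'):
        print(name)

The same pattern works for listing containers themselves by GETting the account URL instead of a container URL.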

More...

In this post I describe how to create and delete containers in Cloud Files.
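
As a quick hedged sketch (placeholder URL, token, and container name), creating a container is a single PUT and deleting an empty one is a single DELETE:

    import requests

    # Placeholders: a real storage URL and token come back from authentication.
    storage_url = 'https://storage.example.com/v1/AUTH_myaccount'
    headers = {'X-Auth-Token': 'my-auth-token'}

    # Creating (or harmlessly re-creating) a container is just a PUT.
    requests.put(storage_url + '/mycontainer', headers=headers).raise_for_status()

    # Deleting it is a DELETE, which only succeeds once the container is empty.
    requests.delete(storage_url + '/mycontainer', headers=headers).raise_for_status()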

More...

Though Cloud Files is really inexpensive, it is nice to know how much you've used on occasion. An easy way to do this is through the Rackspace Cloud Control Panel, selecting your user name (account number) at the top right, and then Usage Overview. But the storage information is also available from the Cloud Files API itself, as well as more granular information per container and object.
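
For a sense of what that looks like in code, here's a hedged sketch (placeholder URL and token) that reads the usage totals returned as headers on a HEAD of the account:

    import requests

    # Placeholders: a real storage URL and token come back from authentication.
    storage_url = 'https://storage.example.com/v1/AUTH_myaccount'
    headers = {'X-Auth-Token': 'my-auth-token'}

    # A HEAD on the account returns the usage totals as response headers.
    resp = requests.head(storage_url, headers=headers)
    resp.raise_for_status()
    print('Containers:', resp.headers['X-Account-Container-Count'])
    print('Objects:   ', resp.headers['X-Account-Object-Count'])
    print('Bytes used:', resp.headers['X-Account-Bytes-Used'])

The same idea works per container with a HEAD on the container URL, which returns X-Container-Object-Count and X-Container-Bytes-Used.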

More...

To begin using Cloud Files programmatically, you need to obtain an authorization token from the Rackspace Identity authentication system. This system is becoming closer and closer to the OpenStack Keystone authentication system, but there will probably always be some differences. Remember, you can always log into the Rackspace Cloud Control Panel for quick tasks. But writing applications that make use of the services is where the power is at.
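
To sketch the shape of that, here's a hedged example using the requests library; the username and API key are placeholders, and the JSON layout and the cloudFiles service name reflect my reading of the v2.0 response rather than anything authoritative.

    import requests

    # Placeholder credentials; substitute your own username and API key.
    username = 'myusername'
    api_key = 'myapikey'

    resp = requests.post(
        'https://identity.api.rackspacecloud.com/v2.0/tokens',
        json={'auth': {'RAX-KSKEY:apiKeyCredentials': {
            'username': username, 'apiKey': api_key}}})
    resp.raise_for_status()
    access = resp.json()['access']

    # The token goes in the X-Auth-Token header of every Cloud Files request...
    token = access['token']['id']

    # ...and the service catalog lists the Cloud Files storage URL per region.
    for service in access['serviceCatalog']:
        if service['name'] == 'cloudFiles':
            for endpoint in service['endpoints']:
                print(endpoint['region'], endpoint['publicURL'])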

More...

As of this writing, Swift has grown to 28,186 lines of code. A year ago it had 21,703. In January of 2012 it had 17,316. 2011 had 13,761. That's 25-30% a year; maybe we've added that much more functionality each year but it sure doesn't feel that way to me. The swift/obj directory has doubled this past year and I'm pretty certain it doesn't do twice as much. Even if you exclude my ssync additions (686 lines), swift/obj has increased 70% and pretty much does exactly what it used to.

I'd love to see the Swift codebase get tighter; but I guess I'm part of the problem. :/ If wishes were horses...

Static large object support is complementary to the dynamic large object support. Each has slightly different advantages and disadvantages and which to use depends on your requirements.

Dynamic large objects are based on a container listing of segments. This makes them very easy to change and they can have an unlimited number of segments; but their segments are restricted to a single container and subject to the eventual consistency delays of the container listing. They also always show 0 bytes in container listings.

Static large objects are based on a static list of segments given at creation time. This lets their segments span many containers and leaves them unaffected by any consistency delays of container listings, but they are a bit harder to create and alter and are limited to 1,000 segments. They do have the benefit of showing their full size in container listings, which may be required for your use case.

In this post, I will show how static large objects work.
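
Here's a hedged sketch of creating one (placeholder URL, token, containers, etags, and sizes); the segments are ordinary objects you've already uploaded, and the manifest is just a JSON list describing them:

    import json
    import requests

    # Placeholders: a real storage URL and token come back from authentication.
    storage_url = 'https://storage.example.com/v1/AUTH_myaccount'
    headers = {'X-Auth-Token': 'my-auth-token'}

    # Each entry names a segment already uploaded as a normal object; the etags
    # and sizes here are made-up placeholders. Note the segments can live in
    # different containers.
    manifest = [
        {'path': 'segments1/bigfile-0001', 'etag': 'etag-of-segment-one',
         'size_bytes': 1048576},
        {'path': 'segments2/bigfile-0002', 'etag': 'etag-of-segment-two',
         'size_bytes': 1048576},
    ]

    # PUTting the manifest with ?multipart-manifest=put creates the static
    # large object; a plain GET of the object then streams the segments in order.
    resp = requests.put(
        storage_url + '/mycontainer/bigfile?multipart-manifest=put',
        headers=headers, data=json.dumps(manifest))
    resp.raise_for_status()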

More...

The Temporary URL feature of Cloud Files is neat, but slightly complicated. It allows you to generate URLs to objects in Cloud Files without having to contact Cloud Files ahead of time. Furthermore, after a configurable amount of time, the Temporary URL will expire and no longer work.

Here I will show how to use Temporary URLs with Cloud Files.
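
The heart of it is just an HMAC-SHA1 signature over the method, the expiration time, and the object path, using a secret key you've stored on the account. Here's a hedged sketch with placeholder key, host, and path:

    import hmac
    import time
    from hashlib import sha1

    # Placeholders: the key is whatever you've set as the account's temporary
    # URL key; the host and path point at your object.
    key = b'mysecretkey'
    host = 'https://storage.example.com'
    path = '/v1/AUTH_myaccount/mycontainer/myobject'

    method = 'GET'
    expires = int(time.time()) + 3600  # valid for one hour

    # The signature is an HMAC-SHA1 over "METHOD\nEXPIRES\nPATH" using the key.
    body = '%s\n%d\n%s' % (method, expires, path)
    sig = hmac.new(key, body.encode('utf-8'), sha1).hexdigest()

    print('%s%s?temp_url_sig=%s&temp_url_expires=%d' % (host, path, sig, expires))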

More...

I found this note in an old collection of mine and thought I'd share.

More...

This started as just a post about how we got to the object server design in Swift, but I quickly realized I couldn't explain that without telling the whole story.

If you don't know what Swift is, it's a distributed object store created by Rackspace and now part of the open-source OpenStack project.

More...

In Part 4 of this series, we ended up with a multiple-copy, distinctly zoned ring. Or at least the start of it. In this final part we'll package the code up into a usable Python module and then add one last feature. First, let's separate the ring itself from the building of the data for the ring and its testing.
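
For anyone landing here without the earlier parts, this isn't the series' exact code, just a minimal sketch of the core lookup idea: hash the item name, shift the hash down to a partition number, and look that partition up in precomputed replica-to-node tables (a real builder fills those tables so a partition's replicas land in distinct zones). The table contents below are placeholder data.

    from hashlib import md5

    PART_POWER = 16  # 2**16 partitions
    PART_SHIFT = 128 - PART_POWER

    # Placeholder tables: for each replica, a list mapping every partition to a
    # node id. A real builder assigns these so replicas end up in distinct zones.
    REPLICA2PART2NODE = [
        [0, 1, 2, 3] * (2 ** PART_POWER // 4),
        [4, 5, 6, 7] * (2 ** PART_POWER // 4),
        [8, 9, 10, 11] * (2 ** PART_POWER // 4),
    ]


    def get_nodes(name):
        """Return the node ids responsible for the named item."""
        part = int(md5(name.encode('utf-8')).hexdigest(), 16) >> PART_SHIFT
        return [part2node[part] for part2node in REPLICA2PART2NODE]


    print(get_nodes('some/object/name'))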

More...

In Part 3 of this series, we further discussed partitions (virtual nodes) and cleaned up our code a bit based on that. Now, let's talk about how to increase the durability and availability of our data in the cluster.

More...