Months ago, OJ Reeves was telling me about Blogofile, a blog engine that generates static content from Markdown. I tried to get a Blogofile rig set up, and after about an hour of fiddling I screamed something like “BLOGOFILE, Y U NO…” and gave up.
I get it; you have to find some way to market yourself, which means differentiation. If you’re well known in your field, the pressure to differentiate yourself only increases. A little pot-stirring or boat-rocking can be a good thing: upsetting the status quo and getting others to question their assumptions is healthy.
I’ve been thinking a lot about how to make this decision, and some recent use cases have helped me internalize and solidify what I believe is a sound foundation for deciding when to build it yourself.
I feel very strongly about this issue based on personal experience, education, and plain fear that some important points are being missed. My frustration is definitely going to show in this post, so yes, it’s “ranty”.
Speed over quality only makes sense if you work in the fast-food industry. Software development isn’t strictly engineering, math, or art; the medium we work in is too tractable for it to be limited to aspects of any single field. It is a unique discipline that requires knowledge of, and interest in, many fields to achieve mastery. Craftsmanship, to me, is about acknowledging these elements of software development and how each is instrumental in delivering real value to the customer, value that provides a return worthy of the investment made in our ability. In my opinion, far too much time is spent trying to reduce the many-faceted complexity of good development to some myopic name, phrase, or platitude. Software development is neither simple nor for the faint of heart. If you are either, flee. Here be dragons.
I’ve recently started working on a new set of architectural challenges. The core requirement, at a high level, is to process millions of transactions a day.
After reading about consistent hashing, it seems to me that it’s one of the best ways to implement APIs that can dynamically scale out and rebalance. Consistent hashing isn’t the entire solution, though; it’s just the algorithm used to make consistent assignments, or relationships, between different sets of data, in such a way that if we add or remove items, the algorithm can be recalculated on any machine and produce the same results (hence, consistent).
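To make that property concrete, here’s a minimal sketch of a consistent-hash ring in Python. This isn’t from the original post; the `HashRing` class, the choice of MD5, and the virtual-node (`replicas`) count are all illustrative assumptions. The key behavior to notice: removing a node only remaps the keys that node owned, while everything else stays put.

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    """Map a string to a deterministic point on the ring (MD5 is an arbitrary choice)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Minimal consistent-hash ring with virtual nodes (a sketch, not production code)."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas     # virtual nodes per real node, to smooth the distribution
        self._ring = {}              # ring point -> node name
        self._points = []            # sorted list of ring points
        for node in nodes:
            self.add(node)

    def add(self, node: str):
        for i in range(self.replicas):
            point = _hash(f"{node}:{i}")
            self._ring[point] = node
            self._points.insert(bisect_right(self._points, point), point)

    def remove(self, node: str):
        for i in range(self.replicas):
            point = _hash(f"{node}:{i}")
            del self._ring[point]
            self._points.remove(point)

    def get(self, key: str) -> str:
        """Owner of a key: the first node point clockwise from the key's point."""
        idx = bisect_right(self._points, _hash(key)) % len(self._points)
        return self._ring[self._points[idx]]
```

Because the hash is deterministic, any machine that knows the current node list computes the same assignments, which is exactly the “recalculated anywhere, same results” property described above.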
Not a day after I posted Using Reactive Extensions To Throttle Asynchronous Tasks, Josh Bush was already (kindly) saying, “I think your code may have a problem.” The issue with the first example is two-fold: first, it doesn’t really work as posted; second, even if it did, it would behave in a less-than-ideal way. Basically, it calls wait after each item, which is not exactly what I was going for.