Here’s my attempt at describing eventual consistency. If I send an update to one system, then a minute later send that update to another system, my system will be inconsistent for that minute. Not quite. In my system there’s only one application that creates new data. Think of that application as the author of a novel. Updates to the novel are sent to other applications in the system. Applications receive updates at different times. Any application reading page 42 of the novel has read the same 42 pages that any other application has or will read. When at page 42, that application’s understanding of the novel is consistent with all other applications’ understanding when they’re 42 pages into the novel.
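The novel analogy can be sketched in code. This is just my own minimal illustration in Java (the `Novel` and `ReaderApp` names are invented for the example, not part of any real system): one writer appends pages in a fixed order, each reading application consumes pages in that same order, and any two readers at the same position have an identical understanding.

```java
import java.util.ArrayList;
import java.util.List;

// The single author appends pages in a fixed order.
class Novel {
    private final List<String> pages = new ArrayList<>();

    synchronized void append(String page) { pages.add(page); }

    synchronized String pageAt(int index) { return pages.get(index); }

    synchronized int length() { return pages.size(); }
}

// Each application tracks how far it has read. Two readers at the same
// position have seen exactly the same pages, in exactly the same order,
// even if they received those pages at different times.
class ReaderApp {
    private final Novel novel;
    private int position = 0;
    private final StringBuilder understanding = new StringBuilder();

    ReaderApp(Novel novel) { this.novel = novel; }

    // Apply the next page, if one is available yet.
    boolean catchUpOnePage() {
        if (position >= novel.length()) return false;  // nothing new yet
        understanding.append(novel.pageAt(position++));
        return true;
    }

    int position() { return position; }

    String understanding() { return understanding.toString(); }
}
```

One reader might be minutes behind another, but neither is ever *wrong*: at page 42 their understanding is byte-for-byte the same.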
I have lots of records in my database. One application writes these records, many applications read these records. So I put them all in one database, and have all the applications access that database. Simple. But then I run into some problems.
Recently, I’ve run into a few interesting talks related to testing. In this first talk, Steven Dow talks about some early studies on how prototyping affects the quality of the end result. As you’ll see in the Q&A, people are hungry to know more about how we can structure tests for our ideas.
In his talk, Unleash Your Domain, Greg Young presents a dense discussion of topics about which I am passionate. At its core, the talk is about how to guarantee a correct audit log and architect for scalability.
The Clean Code Talks concentrate on writing testable code. In his talk, Unit Testing, Miško Hevery explains what unit testing is and makes a case for writing unit tests.
During the first session of the Lightweight Languages 3 workshop (includes video of the talks), Dana Moore and Bill Wright presented ACME: Toward a LL Testing Framework for Large Distributed Systems (abstract), in which they described a distributed application which used the XMPP (aka Jabber) instant messaging protocol to communicate between nodes.
In his talk, Dynamics of real-world networks, Jure Leskovec discusses where to place sensors in a network to detect cascades such as virus outbreaks or rising memes. I particularly enjoyed Jure’s explanation of how the cost-effective lazy forward-selection algorithm, which his team developed, helps to balance the cost of the sensors versus the reward of early detection.
Another podcast I listen to is Software Engineering Radio. The Debugging episode talks about how testing does not eliminate the need to debug, how debugging is a search problem, and tools for estimating where the sources of bugs are located.
In his talk, Time and Attention, Merlin Mann talks about being conscious of what grabs your attention, negotiating requests for attention, and communicating an organization’s culture around communication.
Here’s a common pattern:
Let’s implement an iterator.
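Here’s a minimal sketch in Java (the `Range` class is an example I made up for illustration): an `Iterable` whose anonymous inner class walks a half-open integer range.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// An Iterable over the half-open integer range [start, end).
class Range implements Iterable<Integer> {
    private final int start, end;

    Range(int start, int end) {
        this.start = start;
        this.end = end;
    }

    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private int next = start;

            public boolean hasNext() { return next < end; }

            public Integer next() {
                if (!hasNext()) throw new NoSuchElementException();
                return next++;  // return the current value, then advance
            }

            public void remove() {
                throw new UnsupportedOperationException();
            }
        };
    }
}
```

Because `Range` implements `Iterable`, it works directly in a for-each loop: `for (int i : new Range(0, 3))` visits 0, 1, 2.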
Parleys.com publishes talks from software conferences. I like seeing high-quality slides alongside video of the speaker. Parleys’ podcast is an audio-only version of the talks you can watch on their site. Two of my favorite talks are:
I’ve received some requests to share some of my favorite podcasts. The first I’d like to share is a non-technical podcast called Big Ideas. It’s the only regularly scheduled program dedicated to the art of the lecture and the importance of ideas in public life. One of my favorite Big Ideas lectures is by Oxford University’s Harvey R. Brown. This lecture is a fun ride on the logic train, discussing motion and time. Enjoy.
Having moved to Java, I do miss closures. xUnit.net has a creative use of closures in their unit testing framework:
In his talk, Searching for Evil, Professor Ross Anderson discusses research done in collaboration with Dr. Richard Clayton, Tyler Moore, Steven Murdoch, and Shishir Nagaraja.
I would like to see design by contract become mainstream. JSR-305, Annotations for Software Defect Detection, is a step in the right direction. The applicability of this standard is broader than the name suggests. Here’s a talk about the JSR by Bill Pugh:
I’ve moved to Java. Here are my favorite sessions from JavaOne.
Videos require free registration. You only need to register once for all the videos.
In his talk, Model-Based Testing: Black or White?, Mark Utting discusses the difference between black-box and white-box models and their effect on the ability to automate testing.
In her talk, Signals, Truth, & Design, Judith Donath discusses intentional and unintentional signals as well as truth in signals.
In his talk, Closures for Java, Neal Gafter provides a description of and an argument for closures in Java.
Google found that when pages took only half a second longer to appear, usage of their site dropped 25%. In her talk, Scaling Google for Every User at the Seattle Conference on Scalability, Marissa Mayer communicated these results. The explanation of the statistic starts around 9 minutes into the video and takes about 3 minutes.
In his talk, Drive-By Pharming and Other WebApp Bummers, Sid Stamm discusses creative exploits.
In his talk, Inbox Zero, Merlin Mann discusses one of the most important soft skills of a knowledge worker.
Derrick Coetzee has an article about functional list processing with anonymous delegates. Keep code close to where it’s used.
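Java (still without closures) can approximate the same style with anonymous inner classes. A rough sketch of the idea, with a hypothetical `Predicate` interface of my own standing in for C#’s anonymous delegates:

```java
import java.util.ArrayList;
import java.util.List;

// A tiny predicate interface, standing in for a C# delegate type.
interface Predicate<T> {
    boolean matches(T item);
}

class Lists {
    // Return the items that satisfy the predicate.
    static <T> List<T> filter(List<T> items, Predicate<T> p) {
        List<T> result = new ArrayList<>();
        for (T item : items) {
            if (p.matches(item)) {
                result.add(item);
            }
        }
        return result;
    }
}
```

At the call site, the predicate is defined inline with an anonymous class, right next to the loop that uses it, so the filtering logic stays close to where it’s used.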
Screen scraping is very brittle. It will require continual maintenance and will never be complete. Writing a good screen scraper is like writing a parser. Extracting semantic meaning from text with a poor signal-to-noise ratio is non-trivial.
Send the headers and navigation to the client first so they’ll see something while the server finishes rendering the rest of the page. Even if the client hasn’t finished downloading the headers by the time the render completes, you’ve gotten the first byte out faster.
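A sketch of the idea, using a plain `PrintWriter` in place of a real HTTP response stream (a servlet’s `response.getWriter()` returns a `PrintWriter`), and a `slowBody` parameter standing in for whatever expensive rendering remains:

```java
import java.io.PrintWriter;

class EarlyFlushPage {
    // Stream the static top of the page first, flush it, then do the slow work.
    static void render(PrintWriter out, String slowBody) {
        out.print("<html><head><title>Example</title></head><body>");
        out.print("<nav>...navigation...</nav>");
        out.flush();  // the first bytes are on their way to the client now

        // ...expensive queries / rendering happen here...

        out.print(slowBody);
        out.print("</body></html>");
        out.flush();
    }
}
```

The flush after the navigation is the whole trick: the browser can start parsing and fetching stylesheets while the server is still doing the slow part.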
Lost state is state that has been destroyed, deleted, garbage collected, or otherwise removed from storage for no domain-specified reason: in the best case because of limited storage capacity, in the worst case because the value of the state was underestimated.