This Week at Skills Matter: 15 – 19 September

In the Brain of Allan Kelly


Monday:

We’re delighted to welcome Allan Kelly tonight, as he joins us for his In The Brain talk exploring whether Agile practices can work outside of software development. Allan is a founding consultant at Software Strategy, which works with companies to implement and deepen Agile methods. Tonight he will look at examples of Agile practices beyond software to see what lessons we can draw for knowledge workers in general.

Wednesday:

Swift London return for their September talks, covering Core Data, UX, UICollectionView, APIs and Swift pedagogy. Featuring five speakers, these talks will cover a wide variety of topics for anyone interested in the Swift language, at all levels from beginner to expert. Register your spot now to join this friendly and very active group!

Join software developer and architect Goswin Rothenthal at this week’s F#unctional Londoners meetup, as he explores F# scripting for 3D geometry in the construction industry. Goswin’s custom software models are capable of integrating the entire design process, from the initial geometry setout all the way down to the 2D information or machine code required for manufacturing. He recently used this process at Waagner Biro for the cladding of the Louvre Abu Dhabi, and before that on the façades of the COD-Macao and CityLife-Milano towers at Zaha Hadid.

Thursday:

Join an adventure in F1 data, hear about the state of the Pentaho market and create Pentaho Spark apps with the Pentaho London User Group on Thursday, as they’re joined by three exciting and experienced speakers – Nelson Sousa, Dan Keeley and Diethard Steiner.

Also on Thursday, the London Software Craftsmanship Community look at how to name things, with a solution to the hardest problem in programming from Peter Hilton. Adam Kosiński will then present a Swiss army knife for readable tests, in a talk about ideas and patterns for producing clean, readable and developer-friendly tests.

Build a Search Engine for Node.js Modules using Microservices (Part 1)

Richard Rodger

This is a guest post from Richard Rodger, CTO at nearForm and a leading Node.js specialist. He is a technology entrepreneur and wrote the seminal book on Node: Beginning Mobile Application Development in the Cloud. This is the first of a three-part series on building a Node.js search engine using the microservice architecture, with the second post to follow next week and the last after his In The Brain talk at Skills Matter on 22 September.

Richard will also be speaking at FullStack, the Node and JavaScript Conference on 23 October.

In this post Richard takes us through his experience of building a search engine using microservices, and shows how a microservice architecture can be used to isolate features and create separate components that can be independently developed, deployed and maintained. Part one introduces microservices and starts you building your components and modules. Next week, we’ll cover testing the microservice!


Finding the right Node.js module is hard. With over 90,000 modules (and counting!) it can be tricky to pick the right one. You’ll often come across multiple modules that solve your problem. They are often incomplete, abandoned or just badly documented. It’s hard to make the right choice and you have to rely on rules of thumb. When was the last commit? How many stars? How credible is the author?

It would be great if there was a search engine for Node.js modules that ranked them using these heuristics. I built one in late 2012 that tried to do just that. It was a miserable failure! The quality of the results was pretty poor – see for yourself at nodezoo.com. Search engines are harder than they look, let me tell you. Recently I decided to try again. This time, I wanted to use a microservice architecture. This lets me iterate many aspects of the engine, so that it can get better over time.

This blog post is the first in a series covering nodezoo version 2.0. I’ll also be talking about this at my upcoming In The Brain talk on September 22nd. In this blog post you’ll see how a microservice architecture can be used to isolate the features of the search engine into separate components that can be independently developed, deployed, and maintained – developer heaven!

Let’s focus on the information page for each module. After you get your search results, you should be able to click through to a page that shows you all the useful metadata about a module, such as the number of Github stars, so that you can compare it with other modules. So today we’ll build out one of the services needed to collect that information and display it.

Why Microservices?

The search engine will use a microservice architecture. Let’s take a closer look at what that means. Microservices are small, independently running processes. Instead of a single large, monolithic application running all your functionality, you break it apart into many separate pieces that communicate with each other over the network.

There are quite a few informal “definitions” of microservices out there:

…developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API
James Lewis & Martin Fowler

…a system that is built up of small, lightweight services, where each performs a single function. The services are arranged in independently deployable groups and communicate with each other via a well defined interface and work together to provide business value.
David Morgantini

…a simple application that sits around the 10-100 lines-of-code mark
James Hughes

And even a “ManifestNO”
Russell Miles

Of course, I have my own definition! It’s really the effect on humans that matters most. Microservices are just more fun. I prefer:

An independent component that can be rewritten in one week.

This gets to the heart of microservices. They are disposable code. You can throw away what you have and start again with very little impact. That makes it safe to experiment, safe to build up technical debt and safe to refactor.

What about all that network traffic, won’t it slow things down to a crawl? In practice, no. We’ve not seen this in the production systems that we have built. It comes down to two factors. First, inter-service communication grows logarithmically, not exponentially. Most services don’t need to talk to most other services.

Second, responding to any message has a relatively shallow dependency-based traversal of the service network. Huh? Microservices can work in parallel. Mostly they don’t need to wait for each other. Even when they do, there’s almost never a big traffic jam of services waiting in a chain.

In fact a microservice architecture has a very nice characteristic. Out-of-the-box you get capacity to handle lots and lots of users. You can scale easily by duplicating services which by design have no shared state. Yes, it does make each individual request a little slower. How bad! In contrast, traditional monolithic approaches give you the opposite: better performance for one user, terrible performance for many. Throughput is what counts if you want to have lots of users and microservices make a very cheap trade-off in terms of pure performance. Here’s the obligatory chart showing how this works. Hint: you want to be in the green box. The red one gets you fired.

[Chart: performance comparison of monolithic and microservice architectures]

One more thing. It’s easy to accuse microservices of falling afoul of the Fallacies of Distributed Computing. Essentially pretending that the network is just like making a local function call. That’s precisely wrong. Microservices embrace the network. You are naturally led to deal with the effects of the network when you use microservices because you can’t avoid them. You have to assume that every time you talk to another component in the system you will incur network traffic. It enforces a productive discipline and clarity of mind. Architecture astronauts are quickly and severely punished.

The Search Engine

You won’t build a search engine in one blog post. Let’s look at the module information page. This collects data from multiple sources and displays it all together. Let’s take a bias towards the Availability aspect of the CAP Theorem. Whatever else happens, we want to display a web page to the user.

We’ll keep things simple. Each Node.js module has description data stored by the NPM registry. Let’s get a subset of that information and display it. Most Node.js modules also have a Github repository. It’s useful to know how many stars and forks a module has as that gives some measure of how good it is. Down the road, we might pull information from other sources, like the module author’s blog or a Google search.

So we have a set of information sources, mostly external, certainly unreliable, and definitely slow! Whatever else happens we’ve decided that we’ll always display the information page even if we have no information or only some of it. In microservice terms, you publish a message looking for information and you display whatever you get back within a certain time-frame (or some other cut-off condition). This scatter-gather pattern is explained in detail by Fred George in this video.
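To make that cut-off concrete, here is a minimal scatter-gather sketch in plain Node.js. It is deliberately framework-free and not the nodezoo code: the fetchNpm and fetchGithub helpers are hypothetical stand-ins for the real, slow, unreliable sources. The page is rendered with whatever has arrived by the deadline, even if that is nothing at all.

// A minimal scatter-gather sketch (not the nodezoo code).
// fetchNpm and fetchGithub are hypothetical stand-ins for real data sources.

function fetchNpm( name, done ) {
  setTimeout( function() { done( null, { name: name, version: '1.0.0' } ) }, 50 )
}

function fetchGithub( name, done ) {
  setTimeout( function() { done( null, { stars: 42, forks: 7 } ) }, 500 )
}

function gatherInfo( name, timeout, render ) {
  var results  = {}
  var finished = false
  var pending  = 2

  function finish() {
    if( finished ) return
    finished = true
    render( results ) // may be partial, or even empty
  }

  // cut-off: render whatever has arrived when the deadline passes
  setTimeout( finish, timeout )

  // scatter: ask each source independently; ignore individual failures
  fetchNpm( name, function( err, data ) {
    if( !err ) results.npm = data
    if( --pending === 0 ) finish()
  })

  fetchGithub( name, function( err, data ) {
    if( !err ) results.github = data
    if( --pending === 0 ) finish()
  })
}

gatherInfo( 'express', 200, function( results ) {
  console.log( results ) // only the npm data: github was too slow this time
})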

This acceptance of faulty behavior, this anticipation that hard choices will have to be made and results will be imperfect, is a key part of the microservice philosophy. We aim to reduce uncertainty and the frequency of failure but we never seek to eliminate it. Solutions which cannot tolerate component failure are inherently fragile. We aim to be anti-fragile.

The Architecture

There needs to be a microservice for the information page and then one for each data source. For the moment, that means one for NPM and one for Github. And we’ll also need a web server! Let’s use KrakenJS because it’s pretty cool and has easy server-side templates.

Let’s call the information page microservice nodezoo-info, the data source ones nodezoo-npm & nodezoo-github, and the KrakenJS process can be nodezoo-web. That gives you four Node.js processes to run at a minimum, or not – you’ll see in a later part of this series how to reduce this to one process when developing and debugging locally.

Now, the nodezoo-web service delivers the information page. Let’s use the URL path /info/MODULE. The controller that builds that page should ask the nodezoo-info service for the details of the module. The nodezoo-info service should publish a message to any interested services that care to provide information about modules. The nodezoo-npm and nodezoo-github services should listen for that message and respond if they can.

Sounds nice and neat, but there’s a complication. Isn’t there always! You need the Github repository URL to query Github. That information is in NPM. And yet, the Github service is not allowed to know about the NPM service. For the moment let’s assume we can have Schrödinger services that both do and do not know about other services. That allows the Github service to ask the NPM service for the repository URL. To be continued…

This is so simple that you don’t even need an architecture diagram to understand it. What? It would be “unprofessional” not to provide one you say? But microservice deployments are necessarily dynamic and fluid, changing minute-by-minute to meet the demands of user load and new business requirements. The whole point is that there is no Architecture (capital A). The best mental model is to suspend disbelief and assume all services can see all messages, ignoring those they don’t care about. Message transport is just an optimization problem.

Oh, all right then.

[Diagram: the nodezoo microservice architecture]

The lovely diagram shows you something else. Each microservice owns its own data. There is no shared database. What about consistency you say? Well think about our use case. Node.js modules are not updated that frequently. We can publish messages when they are, and any services that need to update, can do so at their own pace. But module data will be out-of-date and inconsistent in some cases! Well, shucks. That’s by design. We are not going for 100% here. We’re going for fault tolerance.

Think about the error level you find acceptable, and put remedial actions in place to achieve that. Perhaps you publish the update message multiple times over the following 24 hours. It’s wasteful of resources, but covers a lot of cases where services might be down temporarily. Perhaps you have a cross-check service that randomly samples the services to see how consistent they are. Perhaps you have a set of data services and you outsource the problem. This is the style of thinking you need to adopt. Assume things will break.
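As a rough sketch of the first remedy (an illustration only, not code from the nodezoo repositories), you could simply re-announce a module update a few times at widening intervals; the publishUpdate function here is a hypothetical stand-in for sending the real update message.

// Re-announce an update a few times over the following day (illustrative only).
// publishUpdate is a hypothetical stand-in for publishing the real update message.

function publishUpdate( name ) {
  console.log( 'module updated:', name, new Date().toISOString() )
}

function announceWithRepeats( name ) {
  var delaysInHours = [ 0, 1, 6, 24 ]

  delaysInHours.forEach( function( hours ) {
    setTimeout( function() {
      publishUpdate( name )
    }, hours * 60 * 60 * 1000 )
  })
}

announceWithRepeats( 'express' )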

Now there are many use-cases where this cavalier approach to data is not going to fly. But there are many others where it is perfectly fine. Microservices make it easy to avoid over-engineering when you don’t have to. They can all talk to the same ACID-compliant Enterprise Database Solution if you really care that much. But then you have to install and maintain the damn thing. Expensive.

The Code

Let’s look at the NPM service first as that’s pretty simple. The code in this article uses the Seneca microservice framework. That means it’s concise, you can focus on the business logic, and you don’t need to worry about message transportation. Some may balk at the idea of using a framework for microservices. After three years of building systems this way, it makes sense for us to put our collected experience into a reusable package. But you don’t need permission to go bare bones if that’s what you want to do. The basic principles are exactly the same.

The NPM service needs to do three things:

  • return data about a module, if it has that data;
  • query the NPM registry for data about a module;
  • extract relevant data from the large NPM JSON document returned for each module.

Seneca defines microservices as collections of one or more message patterns. If a message matches the pattern, then the microservice executes it. Messages are considered to be plain old JSON documents. The patterns are simple name and value pairs. If the JSON document has matching top-level properties, then it’s a match. More specific matches win. That is, a pattern with more name and value pairs beats a pattern with fewer. And that’s it.

The pattern matching is deliberately simple so that you can simulate it inside your own head. There is no fancy stuff like regular expressions or dot-notation for sub-properties. The Principle of Least Power is very – well – powerful. If you need to get fancy, match as much as you can and do the rest in code.
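As a quick, toy illustration of the “more specific wins” rule (not part of nodezoo), you can register two overlapping patterns and see which one a message is routed to:

// Toy example of pattern specificity (not part of nodezoo).
var seneca = require('seneca')()

// general pattern: matches any message with role:npm and cmd:get
seneca.add( 'role:npm,cmd:get', function( args, done ) {
  done( null, { handler: 'general' } )
})

// more specific pattern: also requires source:cache, so it wins when present
seneca.add( 'role:npm,cmd:get,source:cache', function( args, done ) {
  done( null, { handler: 'cache' } )
})

seneca.act( 'role:npm,cmd:get,name:express', function( err, out ) {
  console.log( out ) // { handler: 'general' }
})

seneca.act( 'role:npm,cmd:get,name:express,source:cache', function( err, out ) {
  console.log( out ) // { handler: 'cache' }
})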

As noted above, all the code for this article is open-sourced on Github. It’s available under the MIT license, so you are free to reuse it in your own projects, open-source or commercial.

Here’s the message action that extracts data from the NPM JSON document.

// excerpt from: nodezoo-npm/npm.js

 seneca.add(
   'role:npm,cmd:extract',
   {
     data: { required$:true, object$:true },
   },
   cmd_extract)

 function cmd_extract( args, done ) {
   var seneca  = this

   var data       = args.data
   var dist_tags  = data['dist-tags'] || {}
   var latest     = ((data.versions||{})[dist_tags.latest]) || {}
   var repository = latest.repository || {}

   var out = {
     name:    data._id,
     version: dist_tags.latest,
     giturl:  repository.url
   }

   done(null,out)
 }

The message action pattern is role:npm, cmd:extract. This is completely arbitrary. The role property namespaces the pattern, and the cmd property names what it does. Not all patterns are commands, some are just announcements for general interest, as you’ll see in later parts.

The implementation of the pattern, the action that it triggers, is placed in the cmd_extract function. This is just good code organization. It also means you can list all the patterns at the top of the source code file, which gives you a little bit of built-in documentation.

The seneca.add method adds a new message pattern. You supply the pattern and its implementing function. Optionally you can specify the arguments you need. These are additional properties that should be present in the message JSON document. In this case, a data property must be present, and its value must be an object. That is, the data property should contain the NPM JSON result. For more details of the argument validation format, see the parambulator Node.js module.

The action implementation follows the standard Node.js callback pattern:

function some_action( args, done ) {
  // context-specific seneca instance – gives you nice logs!
  var seneca = this

  ...

  // there is no error, so the first argument is null
  done(null,out)
}

The args object is the message document. This is what you act on. The done function is the callback to call when you are finished. If there was an error, you should pass an Error object as the first argument to done. Otherwise, pass null, and then your result as the second. Returning a result is not required. Some messages are fire-and-forget, and there is no result as such. Always call done though, even if it’s just done(), as you need to signal that you’re finished processing.

The extraction logic is nothing special, it just picks values out of an object data structure. To see the full structure, try this for example.

Data Storage

Microservices do need to store data occasionally. If you want to be puritanical, then data storage should also be a microservice! Data operations are just more messages. A load message maybe, a save message, a remove message, and so forth. Purity is overrated. Seneca does do things this way, but it provides a nice and easy ActiveRecord-style interface for data entities, so you don’t have to worry about it.

This gives you all sorts of lovely side-benefits. Choose this database! No, that one! Change database in mid-project. Who cares! Need to cache data in redis or memcached? Intercept the messages! No need for the business logic code to know anything about it. Need special case validation or data munging? Intercept and do what you want!

I’ll stop now.

Here’s what the API looks like:

  • Create a new data entity: var color = seneca.make$('color')
  • Save some data: color.name = 'red'; color.save$( callback )
  • Load some data: color.load$( {name:'red'}, callback )

For more details, see the Seneca Data Entities documentation.
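Putting those calls together, here is a minimal sketch of the entity API in action, using only the make$, save$ and load$ calls shown above with Seneca’s built-in in-memory store (newer Seneca versions need the seneca-entity plugin loaded first):

// Minimal sketch of the data-entity API with the default in-memory store.
var seneca = require('seneca')()
// newer Seneca versions: seneca.use('entity')

var color = seneca.make$('color')
color.name = 'red'

color.save$( function( err, saved ) {
  if( err ) throw err
  console.log( 'saved', saved )

  // load$ takes a query object; here we look the entity up by name
  seneca.make$('color').load$( {name:'red'}, function( err, loaded ) {
    if( err ) throw err
    console.log( 'loaded', loaded )
  })
})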

For the moment, you’ll store all data as plain text JSON files on disk. Yes, plain old JSON. On disk. No database needed. Later on, in production, we can worry about LevelDB or Mongo or Postgres (no code changes required! I promise). For now, let’s make local development and debugging super easy.

OK, let’s respond to a request for information about a Node.js module:

  seneca.add(
    'role:npm,cmd:get',
    {
      name:   { required$:true, string$:true },
    },
    cmd_get)

  function cmd_get( args, done ) {
    var seneca  = this
    var npm_ent = seneca.make$('npm')

    var npm_name = args.name

    npm_ent.load$( npm_name, function(err,npm){
      if( err ) return done(err);

      // npm is not null! there was data! return it!
      if( npm ) {
        return done(null,npm);
      }

      // query the npmjs registry as we
      // don't know about this module yet
      else {
        seneca.act(
          'role:npm,cmd:query',
          {name:npm_name},
          done)
      }
    })
  }

This implements the role:npm,cmd:get pattern. You need to provide a name property in your message. Speaking of which, here’s an example message that will work:

{
  "role": "npm",
  "cmd":  "get",
  "name": "express"
}

The code attempts to load a module description with the given name. The npm entity stores this data. If it is found, well then just return it! Job done. The caller gets back a JSON document with data fields as per the role:npm,cmd:extract action.

If the module was not found in the data store, then you need to query the NPM registry. Submit a new message with the pattern role:npm,cmd:query. That’s the action that does that job. Return whatever it returns.

Finally, let’s look at the query action. This needs to make an HTTP call out to the registry.npmjs.org site to pull down the module’s JSON description. The best module to do this is request.

  seneca.add(
    'role:npm,cmd:query',
    {
      name: { required$:true, string$:true },
    },
    cmd_query)

  function cmd_query( args, done ) {
    var seneca  = this
    var npm_ent = seneca.make$('npm')

    var npm_name = args.name

    var url = options.registry+npm_name
    request.get( url, function(err,res,body){
      if(err) return done(err);

      var data = JSON.parse(body)

      seneca.act('role:npm,cmd:extract',{data:data},function(err,data){
        if(err) return done(err)

        npm_ent.load$(npm_name, function(err,npm){
          if( err ) return done(err);

          // already have this module
          // update it with .data$ and then save with .save$
          if( npm ) {
            return npm.data$(data).save$(done);
          }

          // a new module! manually set the id, and then
          // make a new unsaved entity with .make$,
          // set the data and save
          else {
            data.id$ = npm_name
            npm_ent.make$(data).save$(done);
          }
        })

      })
    })
  }

This action constructs the query URL and performs an HTTP GET against it. The resulting document is then parsed by the role:npm,cmd:extract action to pull out the properties of interest, which are merged into any existing data for that module, or else saved as a new data entity. The data.id$ = npm_name line is where you manually set the identifier you want to use for a new data entity.

And that’s all this microservice does. 112 lines of code. Not bad.


Check back next week for part two, when we’ll cover testing the microservice. Can’t wait until then? Let us know your thoughts, ideas and experiences with microservices below, or give us a shout on Twitter!


Clojure eXchange: Call for Papers


The Call for Papers is now open for the 2014 Clojure eXchange

Now entering its fourth year, the annual Clojure eXchange brings together experts and enthusiasts from the world of Clojure to explore one of the fastest-growing functional languages.

This is not just an opportunity to come and listen to experts for two days, but a chance to fully immerse yourself in all things Clojure – and we’re looking for passionate, talented and brilliant Clojure virtuosos to join our line-up.

As with our other conferences, we aim to have between 25% and 50% of talks targeting newcomers to Clojure, Lisp and the JVM. The types of talks we’re looking for include:

  • Clojure in startups
  • Clojure in enterprise
  • embedded projects
  • experience reports
  • robots, art, music, humanities, games, social science…

You can choose to either present a full-length 40 minute talk, or go for a shorter 10 minute lightning presentation. Talks don’t necessarily have to be about Clojure, but must be interesting to Clojurians!

If this sounds like you, head over to the CfP page now. If you have any questions, please contact us at clojurex@skillsmatter.com.

The CfP will close on Friday 31st October, and successful presenters will be contacted by the week commencing 10th November.

One Model In, One Model Out


This is a guest post from Jimmy Bogard, a member of the ASPInsiders group and the C# Insiders group, and a Microsoft Most Valuable Professional (MVP) for ASP.NET since 2009. He is also the creator and lead developer of the popular OSS library AutoMapper.

Jimmy will be speaking at Skills Matter on 3rd September at his In the Brain talk on Object-Relational Mappers. The talk will look at application patterns, ORM features and more to show the correct way of using your ORM, and when to ditch your heavyweight ORM and reach for a lighter one. You can register for this free talk here.

In this post, Jimmy discusses the mediator pattern, and how we can use it to ensure that our behaviours become less driven by a framework – allowing us to focus on the real work being done.


In most of the projects I’ve worked on in the last several years, I’ve put in place a mediator to manage the delivery of messages to handlers. I’ve covered the motivation behind such a pattern in the past, where it works well and where it doesn’t.

One of the advantages behind the mediator pattern is that it allows the application code to define a pipeline of activities for requests, as opposed to embedding this pipeline in other frameworks such as Rails, node.js, ASP.NET Web API and so on. These frameworks have many other concerns going on besides the very simple “one model in, one model out” pattern that so greatly simplifies conceptualizing the system and realizing more powerful patterns.

As a review, a mediator encapsulates how a series of objects interact. Our mediator looks like:

public interface IMediator
{
    TResponse Send<TResponse>(IRequest<TResponse> request);
    Task<TResponse> SendAsync<TResponse>(IAsyncRequest<TResponse> request);
    void Publish<TNotification>(TNotification notification) where TNotification : INotification;
    Task PublishAsync<TNotification>(TNotification notification) where TNotification : IAsyncNotification;
}

This is from a simple library (MediatR) I created (and borrowed heavily from others) that enables basic message passing. It facilitates loose coupling in how a series of objects interact. And like many OO patterns, it exists because of missing features in the language. In functional languages, passing messages to handlers is accomplished with features like pattern matching.

Our handler interface represents the ability to take an input, perform work, and return some output:

public interface IRequestHandler<in TRequest, out TResponse>
    where TRequest : IRequest<TResponse>
{
    TResponse Handle(TRequest message);
}

With this simple pattern, we encapsulate the work being done to transform input to output in a single method. Any complexities around this work are encapsulated, and any refactorings are isolated to this one method. As systems become more complex, isolating side-effects becomes critical for maintaining overall speed of delivery and minimizing risk.

We still have the need for cross-cutting concerns, and we’d rather not pollute our handlers with this work.

These surrounding behaviours become implementations of the decorator pattern. Since we have a uniform interface of inputs and outputs, building decorators around cross-cutting concerns becomes trivial.

Pre- and post-request handlers

One common request I see is to do work on the requests coming in, or post-process the request on the way out. We can define some interfaces around this:

public interface IPreRequestHandler<in TRequest> {
    void Handle(TRequest request);
}
public interface IPostRequestHandler<in TRequest, in TResponse> {
    void Handle(TRequest request, TResponse response);
}

With this, we can modify inputs before they arrive to the main handler or modify responses on the way out.

In order to execute these handlers, we just need to define a decorator around our main handler:

public class MediatorPipeline<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IPreRequestHandler<TRequest>[] _preRequestHandlers;
    private readonly IPostRequestHandler<TRequest, TResponse>[] _postRequestHandlers;

    public MediatorPipeline(
        IRequestHandler<TRequest, TResponse> inner,
        IPreRequestHandler<TRequest>[] preRequestHandlers,
        IPostRequestHandler<TRequest, TResponse>[] postRequestHandlers
        ) {
        _inner = inner;
        _preRequestHandlers = preRequestHandlers;
        _postRequestHandlers = postRequestHandlers;
    }

    public TResponse Handle(TRequest message) {

        foreach (var preRequestHandler in _preRequestHandlers) {
            preRequestHandler.Handle(message);
        }

        var result = _inner.Handle(message);

        foreach (var postRequestHandler in _postRequestHandlers) {
            postRequestHandler.Handle(message, result);
        }

        return result;
    }
}

And if we’re using a modern IoC container (StructureMap in this case), registering our decorator is as simple as:

cfg.For(typeof (IRequestHandler<,>))
   .DecorateAllWith(typeof (MediatorPipeline<,>));

When our mediator builds out the handler, it delegates to our container to do so. Our container builds the inner handler, then surrounds the handler with additional work. If this seems familiar, it’s because many modern web frameworks like koa include a similar construct, using continuation passing to define a pipeline for requests. However, since our pipeline is defined in our application layer, we don’t have to deal with things like HTTP headers, content negotiation and so on.
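For comparison, here is a tiny, framework-agnostic sketch of that continuation-passing idea in JavaScript (it is not koa’s actual API): each step receives the request plus a next function, and can do work before and after the rest of the chain runs.

// Tiny continuation-passing pipeline sketch (framework-agnostic, not koa's API).

function compose(middleware) {
  // build a single handler that threads the request through each step in order
  return function handle(request) {
    function dispatch(i) {
      if (i === middleware.length) return null; // end of the chain
      return middleware[i](request, function next() {
        return dispatch(i + 1);
      });
    }
    return dispatch(0);
  };
}

var pipeline = compose([
  function logging(request, next) {
    console.log('before:', request.name);
    var result = next();              // run the rest of the chain
    console.log('after:', request.name);
    return result;
  },
  function handler(request) {
    return { createdCustomer: { name: request.name } }; // the "inner" handler
  }
]);

console.log(pipeline({ name: 'Bob' }));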

Validation

Most validation frameworks I use validate against a type, whether it’s validation with attributes or delegated validation to a handler. With Fluent Validation, we get a very simple interface representing validating an input:

public interface IValidator<in T> {
    ValidationResult Validate(T instance);
}

Fluent Validation defines base classes for validators for a variety of scenarios:

public class CreateCustomerValidator: AbstractValidator<CreateCustomer> {
  public CreateCustomerValidator() {
    RuleFor(customer => customer.Surname).NotEmpty();
    RuleFor(customer => customer.Forename).NotEmpty().WithMessage("Please specify a first name");
    RuleFor(customer => customer.Discount).NotEqual(0).When(customer => customer.HasDiscount);
    RuleFor(customer => customer.Address).Length(20, 250);
    RuleFor(customer => customer.Postcode).Must(BeAValidPostcode).WithMessage("Please specify a valid postcode");
  }

  private bool BeAValidPostcode(string postcode) {
    // custom postcode validating logic goes here
  }
}

We can then plug our validation to the pipeline as occurring before the main work to be done:

public class ValidatorHandler<TRequest, TResponse>
    : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse> {

    private readonly IRequestHandler<TRequest, TResponse> _inner;
    private readonly IValidator<TRequest>[] _validators;

    public ValidatorHandler(IRequestHandler<TRequest, TResponse> inner,
        IValidator<TRequest>[] validators) {
        _inner = inner;
        _validators = validators;
    }

    public TResponse Handle(TRequest request) {
        var context = new ValidationContext(request);

        var failures = _validators
            .Select(v => v.Validate(context))
            .SelectMany(result => result.Errors)
            .Where(f => f != null)
            .ToList();

        if (failures.Any())
            throw new ValidationException(failures);

        return _inner.Handle(request);
   }
}

In our validation handler, we perform validation against Fluent Validation by loading up all of the matching validators. Because we have generic variance in C#, we can rely on the container to inject all validators for all matching types (base classes and interfaces). Having validators around messages means we can remove validation from our entities and move it into contextual actions driven by a task-oriented UI.

Framework-less pipeline

We can now push a number of concerns into our application code instead of embedded as framework extensions. This includes things like:

  • Validation
  • Pre/post processing
  • Authorization
  • Logging
  • Auditing
  • Event dispatching
  • Notifications
  • Unit of work/transactions

Pretty much anything you’d use a Filter for in ASP.NET or Rails that is concerned with application-level behaviour rather than framework/transport-specific concerns would work as a decorator around our handlers.

Once we have this approach set up, we can define our application pipeline as a series of decorators around handlers:

var handlerType = cfg.For(typeof (IRequestHandler<,>));

handlerType.DecorateAllWith(typeof (LoggingHandler<,>));
handlerType.DecorateAllWith(typeof (AuthorizationHandler<,>));
handlerType.DecorateAllWith(typeof (ValidatorHandler<,>));
handlerType.DecorateAllWith(typeof (PipelineHandler<,>));

Since this code is not dependent on frameworks or HTTP requests, it’s easy for us to build up a request, send it through the pipeline, and verify a response:

var handler = container.GetInstance<IHandler<CreateCustomer>>();

var request = new CreateCustomer {
    Name = "Bob"
};

var response = handler.Handle(request);

response.CreatedCustomer.Name.ShouldBe(request.Name);

Or, if we just want one handler, we can test that one implementation in isolation; it’s really up to us.

By focusing on a uniform interface of one model in, one model out, we can define a series of patterns on top of that single interface for a variety of cross-cutting concerns. Our behaviours become less coupled on a framework and more focused on the real work being done.

All of this would be a bit easier if the underlying language supported this behaviour. Since many don’t, we rely instead on translating these functional paradigms into OO patterns, with IoC containers providing our glue.


In the Brain of Jimmy Bogard: ORMs – You’re Doing it Wrong

On 3rd September, Jimmy discusses Object-Relational Mappers at Skills Matter HQ. If you want to make your DBA (and your users) happy, you’ll want to use your ORM intelligently – come along to find out how!

Register for your free ticket now!

While It’s Compiling: Skills Matter interviews Karolina Szczur

Karolina Szczur is a designer and developer at &yet and contributes to various open source projects such as NodeCopter, AmpersandJS and Otalk.

We caught up with Karolina ahead of her talk at Fullstack: The Node and JavaScript Conference where she will be looking at up-to-date front-end tooling and analysing alternative approaches to compiling, building and automation processes. Here she gives us an insight into her talk topic, the iterative nature of design and her views and thoughts on the Open Source community.



Your recent work includes roles such as the UX Lead at Nodejitsu, writer and editor at Smashing Magazine for the mobile, coding & design sections, and Front-end Developer/Design Lead at XHTMLized. However, you started out doing photography, via psychology, anthropology and the history of art among other things – what first drew you towards development and design?

Honestly, I don’t recall a specific aha moment. I’ve always been drawn to art and photography—we used to have a darkroom at home when I was younger. As soon as digital photography went somewhat mainstream (or at least it became more accessible to the layperson) I started my adventure with editing in Photoshop 5 (yes, it was the 90s). That quickly led to trying to do more than adjust the colours or exposure. It was at the time when GeoCities was at its peak, so my photography passion escalated to “What if I could have a website too?”. I’d say it was rather natural curiosity and organic growth of interest in interconnected fields.

At the end of a recent article you wrote that design “is by nature an iterative process”, and that contributions from others are essential. Considering your experience in front-end design, can you take us a little further into this iterative process?

It takes an extraordinary amount of craftsmanship to create something that’s in any way timeless. Great design and front-end don’t happen instantly. Research, prototyping, implementation and testing take time. But trying to chase perfection before launching only prevents shipping. Like many designers, I’ve struggled with attempting to achieve some arbitrary ideal, but that kind of thinking doesn’t foster creativity.

Content we design for often changes, sometimes drastically. Web standards evolve. We can’t rely on predictability of either the medium or the technologies we have at hand. Working on the web’s rapidly evolving platform means our work will always be an iterative process, whether or not we choose that. There will always be something to improve or fix and we have to be ready to embrace the absolute lack of perfection.

There has been a lot of discussion lately surrounding the importance of shipping—iteration is empowering because it removes the boundaries of judgemental and perfectionist ideals. Collaboration and contributions from others take this idea even further by allowing room for constant feedback, broader perspective, and fresh ideas.

When working on a project, how do you ensure you maintain ownership whilst still benefiting from the input of others?

Ownership is essential. It doesn’t even have to be designated (“This person is the owner”). Often I see individuals stepping up as leaders. In a way, every single person involved in a certain project is an owner—they are all responsible for its execution and success. When everyone feels like that instead of like an irrelevant cog in a giant machine, feedback happens naturally.

Building up fearless candor and honest feedback requires zero judgmentalism and allowing others to step up. Making everyone a leader doesn’t take ownership away—it helps us to be better when working towards a mutual goal.

From a practical standpoint: ask for specific input as often as you can and enable anyone to express their opinions if they are willing to do so. Kill bikeshedding.

In your upcoming talk at FullStack you’re looking at the dangers of introducing too many tools in the development process, making everything more complex and time-intensive. How can you tell when your work-flow is as optimised as it’s going to get?

The topic of tooling is as opinionated as debating the superiority of programming languages or promises over callbacks. Automation and optimisation are crucial, both for complex, bigger projects and the small ones—they let us focus on tasks that simply cannot be automated and bring desired levels of uniformity.

The questions I like to ask myself are: Can it be faster? Can I use fewer tools and more natively available utilities? And last but not least—is it understandable and simple enough to empower my team members and collaborators? I believe that all tools are ultimately created to make our lives easier, so making sure a tool doesn’t introduce cognitive overhead for others is crucial. If it’s standardised, fast, reliable and comprehensible for others—I know I’m home.

It seems that a lot of this comes from the work you’ve been doing on standardisation of front-end tooling at &yet. Can you tell us a little bit more about &yet, and the work you’re doing there?

I’ve been working at &yet for over a year now—it’s been a very humbling experience that allowed me to grow both as a professional and as a person. I’ve been mostly focusing on internal projects, be it our own products or processes for optimising design and development.

In the last months I’ve been busy with a larger endeavour I like to call front-end standardisation. With a hugely multidisciplinary team, where almost everyone can code at some level, it’s important to create an environment that not only enables learning good patterns but also fosters code consistency:

“All code in any code-base should look like a single person typed it, no matter how many people contributed.”

These efforts have been mostly focused on creating a code guide and pattern library in the form of a small, modular CSS framework. It’s definitely a very challenging project and requires a lot of collaboration and feedback from team members. Standards work is never one person creating arbitrary rules to follow, but a collective effort to reach consensus on the optimal approaches for a given team.

Next adventure? Improvements to our WebRTC-based video chat app, Talky.

&yet describes itself as an Open Web Company, and on your own website you say that you strongly believe in Open Source. What makes Open Source so important in your opinion?

Open Source was always at the core of our interests—we’re strongly involved in the Node, XMPP, WebRTC, and JavaScript communities. We support, through Gittip, the people who write and maintain the libraries that we use, and we contribute and share everything we can.

It aligns perfectly with our people first approach to everything we do. After all, writing software or designing is just a means to an end. If we can enable others to make the world a better place even slightly by open sourcing our work or paying others to do so, then we will. We are building these things and working on standards for the good of all of us. If that doesn’t make open source important then I have no idea what does.


&yet have been involved in creating a few community events themselves, and you’ve spoken at conferences such as JSConf EU, the Future of Web Design in Prague and the Frontend Conference in Zurich, amongst many others. How important is the community to the future of Open Source?

The dictionary definition of Open Source focuses on the openness and public availability of the source code. I want to go a little bit further—without people, Open Source dies. With collaboration and contributions projects flourish.

I don’t want to undermine the quality or livelihood of one-person projects (there’s plenty of those on npm), but we’ve learned the need for a community with our recent endeavour, AmpersandJS. We got quick traction and many contributors—one of whom has already been added to the core team. One day we were debating the accessibility of the role attribute and got almost 70 comments from people deeply involved in that matter. We wouldn’t be able to improve the project and push it forward so fast without the help of the community.

Maintaining an Open Source project is a job—it takes a tremendous amount of time and, in a semi-direct way, money. Finding and even fixing bugs is so much harder without others using the software in the wild. I can’t see the future of Open Source without the community surrounding it.

Considering the conferences you’ve been involved in have taken place all over the world, do you see yourself as part of an intrinsically global community, or has your experience been that developers’ goals and missions change depending on where they are?

The idea of global community still feels a little bit like a dream to me. There are a lot of subcommunities, which on one hand is completely natural (the design community, the JavaScript community or the Ruby community) but when those communities become too exclusive and the barriers to entry become discouraging for newcomers it’s hard to talk about a global population of people involved in tech. There’s not only division in specific fields of interest or programming languages but also cultural differences—The Bay Area is so much different from the fairly small town of Kraków where I live, so I can see how priorities could shift depending on the location.

Nonetheless, what I would love to see is less judgmentalism, more openness towards beginners, and more inclusivity towards people of colour, women, and other minorities, including people for whom English is not their first language. I certainly don’t feel excluded, nor do I try to categorise myself as part of a certain group, but until we can all say we feel fairly comfortable, I can’t see a healthy, global community—there’s still work to do to make everyone feel welcome.


While It’s Compiling is a continuing series of interviews with experts across a range of bleeding-edge technologies and practices, exclusive to Skills Matter. Be sure to subscribe to this blog for future interviews, or follow us on Twitter.

Find out who we’ll be interviewing next, and get a chance to put your questions forward with the hashtag #whileitscompiling.
