Brain Yoga with Elixir

Some say that life is too short to do everything; on the other hand, it's also too short to stay in one place and not explore. Especially for software developers, there are so many ecosystems to explore: Node (aka JavaScript), the JVM (aka Java, Groovy, and friends), and the CLR (aka .NET, C#), to name a few from the mainstream.
But when we zoom out, all of them are kind of the same! All use C-like syntax, all are stack-based, all are primarily mutable, and for the most part they solve problems similarly. So even if you explore one of them, you more or less stay in the same mainstream paradigm. On the one hand, that's good, as you get up to speed faster and you can tap into your experience from the other ecosystem.

Arguably, the most growth lies beyond the mainstream, where you must bend your mind and sometimes even question your previous experience.

The question here is whether you want to bend and squeeze the tools you know (like Facebook did with PHP)… or would you rather challenge your brain and learn the proper tool for the task? You can answer this for yourself, in your own time. In this blog post, I'll give you a few highlights from my mind-bending journey into the land of Elixir.

Let’s start with “why”… Why Elixir?

Have you ever heard Java's initial catchphrase, "Write once, run everywhere"? It says nothing about "running forever", "running efficiently", or "running on a cluster". Yet it's now used for all possible use cases, from desktop apps to cloud systems.
Elixir, on the other hand, runs on the BEAM VM, which also powers Erlang. Erlang was created by Ericsson with the sole purpose of running telephone exchanges. It was designed to run forever in clustered mode, with failovers built in and task separation to limit the scope of failures (which will inevitably happen).
What Elixir adds on top is syntactic sugar: a Ruby-like syntax that is easier to learn and understand than Erlang's. It also removes a lot of boilerplate code compared to Erlang.

When should you consider Elixir then? When the thing you're going to build has to run 24/7/365 with error isolation, online upgrades, scalability, and performance… Sounds familiar? Or is my explanation still too cloudy?

Pose 1: Pattern matching

Let’s start from the basics: assignment operation. You may wonder how this could be mind-bending… Stay with me 😉

Probably the first thing we learn when we start our journey with coding is assigning variables. After initial struggles, it becomes second nature to assign a variable. In JavaScript, Java, C#, or any other C-like language it’s always:

const myVariable = 6

or

final String hello = "hello"

No surprise here, in this (simple) case it’s also how it works in Elixir, and it’s even less typing:

myVariable = 6
hello = "hello"

Easy, right?

Let's raise the bar a bit higher and move on to destructuring. If you've written any (modern) React code, you've surely used hooks. So to declare a piece of mutable state you'd do:

const [value, setValue] = React.useState(42)

Let's jump for a moment to Go. As you may know, it doesn't have exceptions; instead, by convention each function returns a pair of values: a result (or nil) and an error (or nil).

Elixir also has tuples, but by convention the first element is :ok or :error, to distinguish between success and failure. With this in mind…

{:ok, file} = File.read("./existing_file")

the above code should be self-explanatory. We're reading file content from disk and storing it in the file variable. But what will happen when the file doesn't exist? You may recall that File.read will return :error as the first element of the tuple. But what happens then? Let's try:

{:ok, file} = File.read("./NOT_existing_file")
** (MatchError) no match of right hand side value: {:error, :enoent}
    (stdlib 4.2) erl_eval.erl:496: :erl_eval.expr/6
    iex:1: (file)

MatchError: the "assignment" has failed! That's because this is not an assignment, it's a pattern match! We can "easily" fix this with:

{:error, reason} = File.read("./NOT_existing_file")

Now, this code will not produce the MatchError.
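
By the way, if you want to handle both outcomes in one place, a common approach (just a small sketch of mine, not part of the original example) is a case expression that pattern-matches on each tuple shape:

case File.read("./maybe_existing_file") do
  {:ok, contents} -> IO.puts("got #{byte_size(contents)} bytes")
  {:error, reason} -> IO.puts("could not read the file: #{inspect(reason)}")
end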

What's more, you can use pattern matching and destructuring in function parameters! That's even the preferred way of writing "conditional" logic. Let's examine this by implementing a sign function. To recap, the sign function returns -1 for all arguments less than 0, 0 when the argument is 0, and 1 when it's greater than zero. In JS it would look like this:

const sign = (arg) => {
  if (arg > 0) {
    return 1
  } else if (arg < 0) {
    return -1
  } else {
    return 0
  }
}

Easy! Let’s now rewrite it in Elixir

defmodule Example do
  def sign(0), do: 0
  def sign(arg) when arg > 0, do: 1
  def sign(arg) when arg < 0, do: -1
end
 
> Example.sign(11)
1
> Example.sign(-89)
-1
> Example.sign(0)
0

Mind blown? Rather than "overriding" or "shadowing" the sign function, we're defining multiple clauses of it, each with a different pattern match and guard (the when clause is called a guard).

This mechanism is really powerful, as you can pattern-match and destructure arguments in one go. You don't need a complicated if-elseif-else block; instead, you get (powerful) pattern matches. It takes some time to get used to… but when you do, you miss it in other languages, as the function body (with all that matching) is short and easy to comprehend.
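
For example, here's a hypothetical handle/1 function (my own sketch, reusing the {:ok, …}/{:error, …} tuples from earlier) that matches and destructures its argument right in the function heads:

defmodule FileReport do
  # each clause matches a different tuple shape and binds its contents in one go
  def handle({:ok, contents}), do: "read #{byte_size(contents)} bytes"
  def handle({:error, reason}), do: "failed with #{inspect(reason)}"
end

FileReport.handle(File.read("./existing_file"))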

Pose 2: Let’s talk about scopes!

We all know how variable scoping works. Even with the fu*ked-up scoping in JS, over time we learned how to use it to our advantage. Let's examine this simple JS code then:

const sumUp = (input) => {
   var acc = 0
   input.forEach((i) => acc = acc + i)
   return acc
}

Now let’s rewrite it in Elixir

defmodule Example do
  def sum_up(input) do
    acc = 0
    Enum.each(input, fn i -> acc = acc + i end)
    acc
  end
end
 
> Example.sum_up([1, 2, 3])
0

If we run the JS version, it will gladly sum up all of the elements of the array. But the Elixir version will always return 0… Why?!

Elixir is a bit special with variable scoping. You can read variables from parent scopes, but if you try to rebind any of them inside a nested scope, the new value is only visible from that point on, within that scope. It will never override values in parent scopes! That's why we always get 0 from our Elixir implementation. BTW, would you be able to rewrite the Elixir version so that it behaves like the JS one? Could you post your version in the comments?
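
To see the rule in isolation (without spoiling the exercise above), here's a minimal sketch:

x = 1
# rebinding x inside the anonymous function only affects the inner scope...
fn -> x = 2 end.()
# ...so out here x is still bound to 1
IO.inspect(x)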

Pose 3: Managing immutable state

Elixir is immutable by default. All data structures are immutable; the only way to "change" them is to return a new value from a function. After all, everything we do in programming is just data processing: data comes in, we do our magic (aka transform it), and data goes out. But we still need to hold some mutable state somewhere, for instance to compute and hold the cart value for each user in an e-commerce app. When everything is immutable we can't do that, right?
Yes, that's right, but there is the GenServer behaviour, which allows us to "mutate" immutable data (as counterintuitive as that sounds).
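
As a quick illustration of that immutability (a minimal sketch using the standard Map module):

cart = %{total: 0}
# Map.put returns a *new* map, the original stays untouched
updated = Map.put(cart, :total, 42)
IO.inspect(updated) # %{total: 42}
IO.inspect(cart)    # %{total: 0}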

Let’s see how we can implement a counter with GenServer

defmodule Counter do
  use GenServer
 
  def start_link() do
    GenServer.start_link(__MODULE__, 0, name: :counter_example)
  end
 
  @impl true
  def init(initial_state) do
    # required GenServer callback: the second argument of start_link/3 (0) becomes the initial state
    {:ok, initial_state}
  end
 
  @impl true
  def handle_call(:value, _from, state) do
    {:reply, state, state}
  end
 
  @impl true
  def handle_call(:increment, _from, state) do
    state = state + 1
    {:reply, state, state}
  end
 
  @impl true
  def handle_cast(:increment, state) do
    # cast is fire-and-forget: no reply is sent, we just return the new state
    {:noreply, state + 1}
  end
 
  def value() do
    GenServer.call(:counter_example, :value)
  end
 
  def increment_sync() do
    GenServer.call(:counter_example, :increment)
  end
 
  def increment_async() do
    GenServer.cast(:counter_example, :increment)
  end
end
 
> Counter.start_link()
{:ok, #PID<0.150.0>}
> Counter.value()
0
> Counter.increment_sync()
1
> Counter.increment_async()
:ok
> Counter.increment_async()
:ok
> Counter.value()
3

We used an integer to keep track of the state, but it can be any data structure: a list, map, struct, etc. From the @impl annotation you may guess that these are implementations of GenServer callbacks, and you're right. The value, increment_sync, and increment_async functions are so-called interface functions that hide the usage of GenServer. The GenServer.call and GenServer.cast functions take two arguments: the first is a process PID or its registered name, and the second is a message sent to that process. So we're sending messages (aka actions) to our implementations of handle_call and handle_cast. Both callback implementations are pure, meaning they don't modify anything; they just return the next valid state for that process.

Talking about state management, actions, and maps… Doesn't that GenServer look somehow familiar? Have you heard of a library that uses actions (plus reducers) and a map to keep track of state? … Redux?

Pose 4: Everything is a process

I’ve already mentioned processes while explaining the Counter example. But before we dive deeper, let’s have a quick recap.

We all know that each application in the system is backed by (usually) one or more operating system processes. You may have heard of "forks", which create a copy of the current process to run concurrently. The thing with forks is that they are slow to create and also expensive, as a fork is a full clone of a process, including its memory, file descriptors, etc. What's more, there is no simple way to communicate between forks.

Then we have threads, which are an improvement over the fork model, as they share memory with the parent and are created within the process itself. Because the memory is shared, it's also easier to communicate between threads: they can just modify the shared memory… and that brings us to the famous concurrency bugs that are hard to reproduce and fix.

Can we do better?

The inventors of the BEAM VM thought we could… Similarly to the JVM, the BEAM VM allocates a single chunk of memory for itself, and it also manages its own internal processes and the communication between them.

To make things "simpler", an internal process in the BEAM VM is called… a process. Each process takes ~2 KB of memory (on a 64-bit system). The only way to communicate between processes is to send immutable messages. That means no memory is shared, and each process can consume messages (and send responses) in its own time, without worrying that its state is being modified by another process.
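
Here's a tiny sketch (mine, not from the original post) of that message passing between two BEAM processes:

parent = self()

# spawn a process that waits for a :ping and replies with a :pong
pid = spawn(fn ->
  receive do
    {:ping, from} -> send(from, :pong)
  end
end)

send(pid, {:ping, parent})

receive do
  :pong -> IO.puts("got :pong back")
end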

Sounds easy and pretty much standard? Wait, there is a twist!

Processes are the main building blocks of Elixir (and Erlang) applications, and they are organized into so-called "supervision trees". There are two types of nodes in a supervision tree: workers and supervisors. A worker does the work, while the supervisor's responsibility is to keep track of its workers. When a worker fails (crashes, etc.), then (depending on the policy) the supervisor can restart that single process or restart all processes under that node.

This means that a single piece of problematic input data will not bring down your whole app! Errors are scoped to that single process and its supervisor. If the process keeps failing after a few restarts, the supervisor gives up and won't try to start it again.
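
For a rough idea of how that looks in code, here's a minimal sketch that puts the Counter from Pose 3 under a supervisor (the explicit child spec map is my own assumption, since our start_link/0 takes no arguments):

children = [
  # if Counter crashes, the supervisor restarts it with a fresh state
  %{id: Counter, start: {Counter, :start_link, []}}
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

Counter.value()
# => 0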

Summary

If any of the above looked odd or unintuitive to you, I highly recommend giving Elixir a try! Why, you ask? As with the similarities between GenServer and Redux, you may find similar ideas being used in different ecosystems, which makes them easier to grasp. A good starting point is the Getting Started guide and the book Elixir in Action by Saša Jurić.

In case I've piqued your interest in Elixir, you'll probably be glad to hear that there is much, much more awesomeness in it and in the BEAM: it is cluster-able by design, there is a "telemetry" module to monitor your instance(s) and integrate with, hot code deployments, a REPL (interactive shell), mix (build tool and dependency manager), etc.

Hope you learned something interesting.

Gerrit Hackathon at Google HQ… next one is coming

Gerrit Hackathon 2015

As always after the Gerrit User Summit, a Gerrit hackathon took place.
This time it was a five-day event (9-11 November 2015), where members of the Gerrit community could work together, fully focused on making Gerrit better software.
The 2015 edition gathered 15 participants from various companies like Google, SAP, Sony Mobile, Qualcomm, OpenStack, Axis Communications, GerritForge and, of course, CollabNet.
Hackathons are really intensive periods for the Gerrit project: over 400 patches were merged, three releases (2.11.5 and two release candidates of 2.12) were performed, and countless open changes and patch sets were pushed for review.
This blog post summarizes the work done during that period, showcasing new features upcoming in the 2.13 and 3.0 releases.

Gerrit metrics

Gerrit Metrics in Grafana

If you are responsible for running mission-critical software for your organization, you know how important monitoring and metrics are, and how important it is to get fine-grained information about application performance. It is simply not enough to know whether it is up and running; you also need to know what overall shape it is in.
This kind of information is especially critical when users start complaining that "Gerrit is slow".
From time to time such complaints also reach our team in Potsdam; then we use Splunk to analyze the load based on Gerrit logs and give recommendations on how to tweak Gerrit. Our Gerrit Performance Cheatsheet was composed based on such cases.
Starting from Gerrit 2.13 we will have a new tool in our toolbox: internal Gerrit metrics!
The DropWizard Metrics library is used as the internal engine. Gerrit exposes over 1300 metrics about crucial internals, e.g. HTTP server response time, git receive-pack, git counting objects, cache sizes, etc.
What is even more awesome, plugins can report their own metrics using the core API. This way the replication plugin for 2.13 will report the time taken to replicate repository data to various locations.
Collecting metrics is one thing; storing them is another. For this purpose three new plugins were created: metrics-reporter-elasticsearch, metrics-reporter-graphite and metrics-reporter-jmx. This makes it possible to plug Gerrit into already existing infrastructure.

Hooks as plugins for core events

There are two ways in Gerrit to be notified about git-operation-related events: the event mechanism and Gerrit hooks. Both provide almost identical functionality, which makes it harder to decide which one to implement.

During the hackathon, work was started on extracting the hooks mechanism into plugins that listen to core Gerrit events.

This work is still ongoing, but once it is finished, anyone who wants to run server-side hooks will have to install the Gerrit hooks plugin.

gwtorm can be used from plugins

You may be wondering what gwtorm is. It is a library written for the Gerrit project to access relational databases: a lightweight way of connecting your Java application to multiple different DB backends. Initially it was meant to be used only by GWT-based applications (hence the gwt prefix in its name), but nowadays it can be used by any Java application.

Why use gwtorm in a plugin? Well, if you don't want to modify the Gerrit schema (which is highly discouraged) to store your plugin data, and you want to support many SQL dialects out of the box, gwtorm is the way to go.

The first plugin that will use this library is the gerrit-ci-plugin.

Gerrit 2.11.5 and 2.12-rc

Gerrit releases don't happen too often. Sometimes we had to wait long months (and over 1000 commits) to get a new stable version of Gerrit. Usually, just before a hackathon, a release candidate of the new stable version is cut from the master branch.

During this year's hackathon we got three releases! One was a service release for the 2.11 branch (with updated release notes), containing fixes for the JavaScript clipboard, styling, and commit validation error handling.

Apart from the service release, two release candidates were published for Gerrit 2.12.

Submit whole topic dialog

Gerrit 2.12 changes how patches are submitted to the repository after code review. In all previous versions there was a so-called "merge queue", which was responsible for submitting patches in the right order. If a particular change was submitted but its ancestors were still under review, it ended up in a special state: "submitted, merge pending".

In 2.12, changes arranged in a chain (one change depending on another) can be submitted at once with a single click on the submit button of the topmost change.

Additionally, a new feature called "submit whole topic" was added. It enables submission of changes that share the same topic, and this can be done across multiple projects and branches.

One thing that struck us when this feature was presented during the Gerrit User Summit was the change in the semantics of a Gerrit "topic". Before 2.12, topics were only metadata that could be freely added and removed, plus there was the possibility to search for changes sharing the same topic. Starting from 2.12, setting a topic on changes will change how they are submitted. In some rare cases one can submit other people's changes, or be blocked because a change that is not visible to everyone is still waiting to be reviewed.

To make submitting more transparent during the transition period, a submit dialog was proposed. It pops up after clicking the submit button, but only when changes from the same topic would be submitted. It presents the list of changes that would be submitted with the topic and without it, so that the submitter can choose whether to merge just the change in question or all changes of the same topic.

CI verification

Some time ago a Diffy build bot was introduced to verify changes pushed to gerrit-review.googlesource.com, but after a while it became unreliable, often simply not verifying anything because it was not running.
Now there is a new verifier in the picture, based on the proven tandem of the gerrit-trigger-plugin (with its custom REST API polling strategy) and Jenkins. It is kindly hosted by GerritForge. Long live the new GerritForge CI bot!

NoteDB

NoteDB is the defining feature of Gerrit 3.0. It will replace the "conventional" database system and store everything inside git repositories. All the data that is currently stored in the SQL DB will be moved to git repositories: review-related information will be stored in the corresponding repository using git notes and special refs, and user data will be moved to a dedicated repository.
During the hackathon further steps were taken towards the goal of removing the dependency on the SQL DB: some integration tests were fixed, and NoteDB tests were enabled as part of the verification job.

New Gerrit UI with Polymer

Last but not least, the new Polymer-based web UI for Gerrit was initially announced and integrated into Gerrit's build process.

During the Gerrit User Summit, Google presented a draft of the new Gerrit web UI. This time it is written using Polymer, a new JavaScript UI framework from Google.

The new web UI will be fully written in JavaScript, making it easier for UI/UX designers to modify and faster to develop and compile.

As I mentioned before, during the hackathon the PolyGerrit project was integrated into the standard Gerrit build system. This required teaching Buck how to deal with JavaScript and its dependencies.

What is next?

Next is the Berlin Gerrit Hackathon in 2016. We've opened a poll to gather input from the community about the preferred date between the 22nd of February and the 25th of March 2016. Please participate if you would like to join us and hack Gerrit in Berlin 🙂

How to easily customize Gerrit submit rules

A Gerrit submit rule is a set of conditions that need to be fulfilled before a change can be submitted (read: merged) to a given branch. By default there are only two simple conditions:

  1. Verified +1 (V+1)
  2. Code Review +2 (CR+2)

The first one means that the change doesn't break the build (or project integrity). This step can (and should) be automated using a continuous integration system, like Jenkins with the Gerrit Trigger plugin. Automation here will save tons of man-hours spent on reviewing code that doesn't compile and/or breaks unit/integration/system tests.

The second one (Code Review +2) means that somebody from the team spent some time reviewing and understanding the change, didn't find issues in it, and thinks this change is ready for production.

This set of rules seems reasonable and will be sufficient for "most" projects. But it has some flaws indirectly built in.

First of all, there is no condition on the person giving the CR+2. In this case the change author can submit his own change, because there is no condition that would block him from doing so.

Also, if you would like to enforce stricter review rules for a given project, e.g. requiring at least two CR+2 votes to submit a change, you will probably end up with an "internal convention", not something that can be enforced automatically by Gerrit.

Of course, one can say that those two cases are exotic. Yes, in a way they are. But my point here is that the default Gerrit submit rules are OK for (let's say) 90% of projects: projects that follow the Android Open Source Project review principles (they may not even know that they follow them ;)).

What about the remaining 10%?

No, they are not forgotten by Gerrit… but they have a slightly harder life at the beginning.

Why is it harder? Because of Prolog.

Gerrit gives you a tool for defining your own submit rules per project. But the entry barrier is (I would personally say) high.

To define your own submit rules, one needs to learn the Prolog programming language, then understand the Gerrit Prolog API, and finally define such custom submit rules for each project in the refs/meta/config branch.

This is awesome! Show me a tool that has such flexibility built in, ready to use… and is free? Yes, the entry level is high but, come on, this is a one-time investment and you are set for (almost) a lifetime… 😉

But maybe we could do something better here? Maybe we are missing something… maybe we are not looking at it abstractly enough.

Let me compare the code review process to a standard build process. In both cases you have some steps that need to be accomplished before you move to the next milestone. In a build, first of all the source code files need to be compiled; the same goes for the test sources in the next step. After that the tests are run, and when they pass successfully, the project can be packaged and put into production.

The same goes for the review process. First of all, a change needs to be verified (compiled and tested), then team members look at the code, and if they find issues with it, the change must be reworked. If not, it can be "packaged" to "production", I mean, merged to the branch.

If we use such an approach, then maybe instead of writing code for review rules, we could have a configuration file. Why not put configuration and convention over Prolog code?

Provided we had such a configuration syntax in place, we could define a set of rules that verify the configuration file. Then writing a UI for generating such a config file shouldn't be so hard (compared to generating Prolog code).

What if…

OK, let's finish with those "what ifs", because there is no point in wishful thinking. Why? Because we have already implemented such a "configuration over Prolog code" approach at CollabNet. This is what we call the Quality Gate wizard.

It contains two key parts:

  1. Quality Gate Gerrit Backend plugin – adds a special Prolog fact capable of understanding an XML-based configuration parameter.
  2. Quality Gate RCP Wizard – an Eclipse-based desktop application (built into the GitEye app) that allows you to use one of 15 predefined rules, define new submit rules, and edit existing ones, and then upload them to Gerrit. Everything from your desktop; no command line, text editor, or git command is involved in the process.

More information about Quality Gates can be found in our blog posts 1 2 3

Learning Gerrit Code Review by Luca Milanesio

It is finally here! The idea behind this book was mentioned many times during the Gerrit community meetups, and finally Luca has made it a reality! The Gerrit book is out there and it is a pretty good read!

Learning Gerrit Code Review book cover

I had an opportunity to go through this book and I must admit that it is a fully complete guide to Gerrit. You will learn not only how to use Gerrit and how to create, publish and submit reviews, but also how to set up Gerrit from scratch, integrate it with Jenkins/Hudson, GitHub and your corporate single sign-on mechanism. Moreover, there is even an example configuration for an Apache reverse proxy! If you are not familiar with the Git version control system, you will even find essential information on that topic. In other words, this is an exhaustive introduction to Gerrit.

In the book you will find an example of a code review workflow with a detailed description of how things work in Gerrit, why and where to put the 'Change-Id', as well as why it is so important for Gerrit. Apart from that, you will learn about Gerrit's terminology and the conventions used in the community, such as WIP, RFC and 'nit'.

All in all, if you are planning to start your journey through code reviews with Gerrit, this is a book I can highly recommend.

Gerrit London Hackathon May 2013

At the beginning of May 2013 the first European Gerrit hackathon took place in London. It was quite some time ago (more than a month), but in my opinion it is always good to have a summary afterwards.

So, as I mentioned, it was the first Gerrit hackathon in Europe, organized by Luca Milanesio in London and kindly hosted by ITHR Consulting. Twelve participants came from a variety of countries and industry areas to work together on Gerrit. We had three really productive days, full of many interesting discussions about the project's future and new improvements, and coding sessions with an interactive feedback loop (no delays or time zone differences)… just focus on the task at hand and the proper solution for it 😉

I think the main topic and killer feature of this hackathon was inline editing, driven by Martin Fick (Qualcomm), Edwin Kempin (SAP) and Dave Borowitz (Google). As far as I know this is already available in the current master branch (2.8-SNAPSHOT) and allows a user to edit their commit in the browser. By 'edit' I really mean editing files in the browser and 'committing' them back (of course this will create a new patch set). With this functionality you can easily and quickly fix typos/whitespace/comments in code and commit messages without fetching the given change locally, amending and pushing it back. This could save tons of time… but of course it could hit you very hard if you are not careful enough.

Another interesting topic, which is actually not often addressed during such events, was… documentation. Let's be honest, the Gerrit documentation is good when you are a contributor/committer, but for newcomers or end users it is simply unhelpful. Huge thanks to Fredric Luthander (Ericsson), who brought this topic up and did awesome work in this area! … I'm not really good at documentation, and I still need to update the Gerrit docs about JavaScript and GWT-based plugin development. Hopefully I will do it… in a few months 😉

The next topic was statistics and some groundwork around generating reports from Gerrit. AFAIR Edwin Kempin, David Pursehouse (Sony Mobile), Gustaf Lundh (Sony Mobile) and Emanuele Zattin (Switch Gears) had some discussions about how stats can be collected and accessed. AFAIR there was also a (proof-of-concept?) patch sent for review that added a REST service with some basic statistics.

I think that most Gerrit administrators and contributors don't know what the term 'capability' means in the Gerrit environment. So, a 'capability' is a 'type of permission', e.g. 'forge author' or 'label verified' are core Gerrit capabilities. Unfortunately plugins cannot contribute their own specific capabilities, which is really painful in the case of the replication plugin, which uses the 'start-replicate' capability defined in core (but not used there) to grant users permission to execute replication. Looks like this awkward situation was somewhat painful for David Ostrovsky (independent), since he started working on this topic (together with Dave Borowitz). I know that some patches were sent for review and I hope that in 2.8-SNAPSHOT this problem is sorted out. This also means that other plugins will be able to contribute their own capabilities and extend Gerrit access rights this way.

There was also a continuation of the (never-ending story of) Gerrit multi-master configuration. As usual, this topic was brought up by Luca Milanesio and Deniz Türkoglu (Spotify) 😉

During the hackathon Deniz Türkoglu was also working on a 'blame plugin' for Gerrit. The idea is to send mails to the authors of code lines when somebody changes a specific line or code section. AFAIR there were serious problems with the Gerrit API, which disallows accessing the DB outside of the RequestScope. Hope this problem will be solved in the near future so the community can enjoy this plugin 😉

And finally, last but not least, my main focus area in Gerrit… web UI extensibility. Together with Luca Milanesio, Emanuele Zattin and David Ostrovsky we tried to make Gerrit more extensible. David was pushing for server-side UI extensions and had already done some groundwork for this, so I picked up the idea and implemented a server-side extension point for contributing links to the Gerrit top menu (code example); then Luca came and integrated it with the GitBlit plugin. But my main goal is to have native UI plugins in Gerrit, either in JavaScript, GWT or ClojureScript (anything that compiles down to JavaScript), so a few hours later I proposed an event-based JavaScript API. Right now this is only a concept and I'm looking for feedback on it; currently it only allows adding rows to the patch info table (code example).

I spent the last few hours of the hackathon investigating GWT replacements for the Gerrit web UI. After looking into some possibilities I chose AngularJS and did some initial hacking. There is not much to share right now; I can just say that implementing the project list page in Angular was really fun and straightforward. But playing with a new JavaScript ecosystem was quite a pain for me… maybe I'm too Java-ish ;). Currently I have a replacement for the current GWT-based project list page in Angular, but this was the easy part (I think); more difficult will be integrating it with the current Gerrit GWT UI and build system.

As we are on the 'build system' topic… during the hackathon the decision was made that Gerrit will give the Buck build system a try (ant-like, developed at Facebook, similar to Google's internal one). In Gerrit 2.8 you will not find a pom.xml but a BUCK file; this transition should make Gerrit development and releasing easier. Gerrit's Buck scripts can generate Eclipse project configuration files and can also use Maven repositories for fetching dependencies. I can confirm that without tons of Maven projects in Eclipse, the IDE is more responsive, GWT development is faster and easier… and the build time is shorter… but… there are downsides as well. Buck only supports unix-like systems and it is not (yet :)) an industry standard.

OK, I think that's it… I had a great time during this event, and also during my morning runs in Kensington Gardens. Hope to visit London again. See you all at the next Gerrit hackathon/User Summit 😉