If your objective is to keep your customers happy and satisfied, the best way I have found to do that is to involve them in your development and releases from the outset. When you show them that you value their opinion, take it to heart, and gather their feedback from the beginning, not only will they be happier, but they will also end up with the exact product they asked for. If I created a mantra for my company it would be ‘Early and Often’: builds, testing, integration, collaboration, and promotions should all happen early and often. The faster we turn features around to end users, the faster we get feedback, and the faster those features are promoted to production.
From the developer and release-engineer perspective, everything needs to be automated. Nothing should be left to manual processes, which are time consuming and error prone. The objective should always be to make releasing a software product a non-event. Everything needs to be repeatable and automated; the only people who should worry about a release are the marketing folks. Integration and testing should be automated and run automatically on every check-in of source code.
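The check-in gate can be sketched as a small script. A real CI server (CruiseControl, Jenkins, and the like) watches the repository and runs something equivalent after every commit; the `build.sh` and `run_tests.sh` commands below are hypothetical placeholders, not part of any particular tool.

```python
import subprocess

def on_checkin(commands=None):
    """Run the build and test suite after a check-in; stop at the first failure.

    `commands` defaults to hypothetical build/test scripts; a CI server
    would normally poll the repository and run these for you.
    """
    commands = commands or [["./build.sh"], ["./run_tests.sh"]]
    for cmd in commands:
        # A non-zero exit code means the baseline is broken.
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            return False  # notify the team immediately
    return True
```

With the Unix `true`/`false` commands as stand-ins, `on_checkin([["true"]])` reports a clean baseline and `on_checkin([["false"]])` reports a broken one.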
I learned what works for me and my organization through trial, error, and experience. When I started my career in IT I was hired as a junior Java developer, and from the outset my role included being the release specialist. In my first couple of months on the job I implemented a Linux-based source repository called Aegis, which forced clean builds and approvals prior to integration into the master branch. Naturally, I have always been interested in different change management and project methodologies. Three years into my professional career, the company I was working for was transitioning to CMM Level 3, which I was heavily involved in as I moved out of the developer role and became the Configuration Manager for the organization. CMM is essentially a document and process standard which, depending on the level, enforced repeatable standards and processes for everything from sales and development to corporate training and support.
A small part of this was the organized structure for project management and development. CMM itself didn’t necessarily care how you conducted development or managed projects, but it wanted to ensure you followed a standard, repeatable process and plan. When I was in school, Agile wasn’t on the curriculum; we spent our project management classes learning only the Waterfall model. Both methodologies can fit into CMM.
CMM stands for Capability Maturity Model, a standard for policies and procedures that allows organizations to have repeatable, sustainable processes; it was primarily popular in government. Generally speaking, CMM was very popular 10+ years ago and is slowly dying out in favour of the Agile movement. I spent a couple of years helping our organization transition to CMM Level 3 and preparing for an audit. In doing so I learned a lot about what does and doesn’t work, and just how much time and money an organization wastes following such rigorous processes.
CMM has five tiers:
- I: Initial (chaotic or ad hoc); generally anyone with an undocumented process
- II: Repeatable; a repeatable process, with a set outline of required documentation
- III: Defined; a standard practice, defined and conformed to across business processes
- IV: Managed; quantitative measurements
- V: Optimizing; ongoing process optimization and improvement, ever evolving.
Obviously, given this, technically any project management methodology fits within CMM to some degree. We built rigid guidelines on which documents were required for a project to be signed off at each stage, and our QA department performed regular audits to ensure the processes were being followed. Eventually I built an automated way of sifting through our source repository (Perforce) for documents and performed automated audits. We had dozens of required documents from sales to go-live, some of which spanned hundreds of pages. We would spend weeks or months on a design document or functional specification only to have everything change by the end of the design phase, or worse yet, by the end of development. It wasn’t uncommon for us to spend a month collaborating on a design document with a half dozen programmers and architects only to have everything turned upside down when we tried to get sign-off from the customer and stakeholders in order to move on to the next phase.
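At its core, that automated audit boiled down to checking each project’s files against the required-document list. A minimal sketch of the idea, assuming an illustrative subset of document names and a plain file listing (the real version walked the Perforce depot):

```python
# Illustrative subset of the required documents; the real list had dozens.
REQUIRED_DOCS = {
    "statement_of_work.doc",
    "project_charter.doc",
    "functional_specification.doc",
    "detailed_design.doc",
}

def audit_project(project_files):
    """Return the required documents missing from a project's file listing."""
    present = {name.lower() for name in project_files}
    return sorted(doc for doc in REQUIRED_DOCS if doc not in present)
```

An empty result means the project passes the audit; anything returned is flagged for follow-up.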
Repeatable, definable processes themselves were actually quite beneficial; my concern revolved around requiring sign-off on each document and each phase. Much of the documentation was never read, and requiring sign-off just delayed the project start. I would rather spend that lost time building prototypes or starting development, while collaborating with the customer throughout.
So, from the outside looking in on all the projects, I saw what I felt worked and what was deemed a waste of time. One particular project was fixed bid, so we tried to manage risk and limit change as best we could to keep our profit from shrinking. We spent nearly three months designing the system before we even touched a line of code, and that was after writing and getting sign-off on the following documentation (that I recall; I know there were others):
- Statement of Work
- High Level Estimate
- Project Charter
- Configuration Management Plan
- Detailed Estimates
- Project Management Plan
- Quality Assurance Plan
- Quality Control Plan
- Test Sweeps
- Functional Specification
- Detailed Design
- Release Plan
- Weekly Reports
- Gantt Charts
As you can imagine, a great deal of time was wasted on documentation that someone read once and generally never looked at or followed again. It was my job to archive all the documentation in our corporate library and audit each project to ensure it followed the appropriate policies in the correct order. Each step had to be completed before we could move on.
My first project team was isolated from the rest of the company; we predominantly followed a hybrid Extreme Programming/Waterfall model, while the remaining project teams followed Waterfall. As the months went by our product became vapourware, the team dissolved, and we became solely a Waterfall company. I was a big proponent of Waterfall when I was a developer, but once I was on the outside looking in and saw the amount of effort needed to track every minute detail and keep the Gantt charts accurate on a daily basis, I realized just how ineffective it was as a method of measuring progress. At the end of the day it let managers see pretty graphs and feel better about the progress, and that was really it. It takes years of experience as a programmer to learn how to build accurate estimates, and a Gantt chart is only as good as its estimates, which are generally very poor.
Change management was handled via meetings with the customer every two weeks. Because we already had a defined schedule and the projects were usually fixed bid, it was difficult to approve new changes: the customer didn’t want to pay more, and we didn’t want to cram more work into a finite timeline. Approved features were added to the Gantt chart, and each of the above documents was updated to reflect the changes and pricing. More often than not, the customer didn’t want timelines to change, so we worked increasingly insane hours to keep up.
I tend to be very analytical and structured by nature; I create daily to-do lists for both work and personal life, and follow them religiously. My aim has always been to do my best to complete them, and anything unfinished flows into the next day. Even so, I only believe in process and structure that allows my project to proceed effectively and efficiently. I feel it’s best to do little documentation, move into development as quickly as possible, and follow up with the customer on a weekly basis to ensure the project is on the right track. We manage change by advising the customer that changes will require other work to be removed, or the timelines to move out.
Being Agile essentially means following a grouping of development methods that promote collaboration, self-organization, adaptability, and cross-functional teams. It encourages early development, early delivery, and continuous improvement.
When I moved to my current company three years into my career, I was hired as their first full-time programmer. This allowed me to single-handedly create the policies and procedures from the ground up. These processes have changed drastically over the years, as I feel process should always be evolving; that is the whole point of being Agile. In the beginning we fully implemented Extreme Programming (XP), as this is what I had experience with. I read and learned as much as I could, since we had only used the programming aspects of XP at my previous job. We had two programmers in a shared room, so we glued cork to one wall, used string to separate the stages of a story, and used cue cards to keep track of features and bugs.
Extreme Programming essentially takes the traditional Waterfall model and repeats it in two-week cycles. This allowed us to define or refine, develop, and have something close to releasable every two weeks. The constant collaboration with the customer or stakeholders of the project ensured the end result was always being refined, and the final product was exactly what everyone wanted. It also allowed us to continuously integrate, test, and release the build to QA every two weeks. One of the first things I did was install Subversion and a continuous build environment called CruiseControl. We used the cork board to track user stories and bugs; it had three columns: Backlog, In Progress, and Complete. We used the board successfully for nearly a year, but I wanted more metrics: I wanted to know quickly what our velocity was and how close our estimates were. At that point we were transitioning to working from home part-time, which provided a good opportunity to try out a web application to manage our projects.
I installed XPlanner, which gave me the metrics I wanted. We still used story points, but I preferred tracking hours as well, since I wanted to know how accurate my estimates were. To this day I still use both story points and hours to track my work. We used JUnit tests for our business logic, and manual testing for acceptance and regression testing.
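The JUnit tests pinned our business rules down in isolation, so a refactor couldn’t silently break them. The same idea in Python’s unittest, with a made-up discount rule standing in for real product logic:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    """Regression tests: each case documents one behaviour the rule must keep."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Run on every check-in, a suite like this tells you immediately whether the baseline still holds.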
As the years went on we refined our processes, and now we use something closer to Scrum.
Around 2007 we implemented Selenium to automate much of our regression testing; we still use manual testing for boundary cases and for capturing errors or missing regression tests. Automated regression testing combined with CruiseControl gave us a continuous integration and testing environment that reports back, as soon as we check in source code, whether we have broken our baseline. It was our policy to build user stories and/or tasks that took no more than one to two days, so we had to check in our source code at least every other day. This let us know quickly whether everything still compiled and passed our regression tests. Once a new feature is developed, we build new automated regression/acceptance tests to validate the feature.
I wasn’t entirely happy with XPlanner, and I was on the lookout for a replacement for years until I finally found one I was happy with in Jira with GreenHopper, around 2008; we’ve been using it ever since. Internally we actually follow two different agile methodologies: Scrum and Kanban. Generally, internal non-development projects follow Kanban, and the rest follow Scrum. I’ll discuss our Kanban procedures in more detail later.
We handle change management on an as-needed basis. One aspect of software development I have learned about, and have never shied away from, is scope creep. If we don’t allow scope creep, our customers will never be satisfied with the end result, no matter how prepared you think you or they are; there will always be changes after they have seen and played with the product. That said, there is always a fine balance in managing what can wait until a later patch. At the start of a project we define all the user stories for a release; let’s say we have release 6.3 planned for an October 1st go-live. We get the team and stakeholders together and select the user stories, with story points estimated. At this point we have an idea of our average velocity for a two-week iteration based on previous projects, so we know approximately how much work can be completed within the allotted time. We always work on features from highest to lowest priority, and hardest to easiest. This lets us know early on if a release may need to be refined or shifted.
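The planning arithmetic behind this is simple: average the last few iterations’ velocities, multiply by the number of iterations in the release window, and fill that capacity from the top of the priority list. A sketch, with illustrative numbers:

```python
def release_capacity(past_velocities, iterations):
    """Story points we can expect to finish, based on historical velocity."""
    average = sum(past_velocities) / len(past_velocities)
    return average * iterations

def plan_release(stories, capacity):
    """Fill the release from a priority-ordered list of (name, points) stories."""
    selected, used = [], 0
    for name, points in stories:
        # Skip stories that would blow the budget; keep scanning for smaller ones.
        if used + points <= capacity:
            selected.append(name)
            used += points
    return selected
```

With an average velocity of 22 points and six two-week iterations before the go-live date, the release holds roughly 132 points; anything beyond that waits for a patch.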
At the start of the release we choose which features go into the next iteration (also sometimes called a sprint), based on the priority and difficulty of each user story. Once a story is added, the programmers select which stories they want to work on and create subtasks for the user story, with hour-based estimates for each task. The iteration is started, and generally we try to meet internally every day to ensure everyone is on the same page; however, we’re a small team, and often we don’t have a formal stand-up meeting because we talk over IM for much of the day and questions come up naturally. At the end of each iteration we close off the finished stories and migrate any open stories back to the release backlog. Naturally these stories move back to the top of the queue and will most likely be selected for the next iteration. The completed iteration is deployed to QA, and our customers/stakeholders are given access to provide feedback. This gives our customers ample time to respond; often a customer doesn’t know what they want until they see the end product, so it’s important to refactor and refine your product to make changing it as painless as possible. Any feedback is logged as user stories and assigned to a future iteration. Even for internal product releases we always have at least one customer, always someone from our user group, represent our entire customer base. Near the end of a release cycle we give a half dozen users access to our QA site for acceptance testing and feedback.
The second methodology we follow is Kanban. It’s essentially like Scrum but with less process, more similar to XP, so we generally don’t follow it for development releases; we use it internally for operations, marketing, etc. One of the biggest differences between Kanban and Scrum is the removal of iterations. The purpose of Kanban is to remove the time box, which is especially beneficial for development teams who also do maintenance programming or are likely to be interrupted during an iteration. Kanban came from Toyota, where a card system required a card to be presented at each assembly line to ensure no more material was in use than necessary.
The time box is instead moved to the workflow state. Each workflow state, such as To Do or In Development, limits the number of tasks, a limit on what is called Work in Progress (WIP). If we limit the To Do state to, say, five use cases, the customer is forced to move a To Do item back to the product backlog if they change their mind and want to add a new item.
The developers then pull a To Do use case into the In Development (ongoing) workflow state when they’re ready to do so. Releases to testing are event driven: when the developers are ready to release the product to testing, they do so. This can be on a use-case-by-use-case basis or weekly, but generally won’t be any longer than every two weeks (if it’s a very big use case). We almost never use Kanban for development unless we’re working on a release without a timeline and we’re expecting interruptions.
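The WIP limit is easy to model: a column refuses a pull once it is full, which forces something to move out (or back to the backlog) before anything new enters. A minimal sketch:

```python
class KanbanColumn:
    """A workflow state (e.g. To Do, In Development) with a WIP limit."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def pull(self, card):
        """Pull a card into this column; refuse if the WIP limit is reached."""
        if len(self.cards) >= self.wip_limit:
            return False  # something must move out before this can come in
        self.cards.append(card)
        return True

    def release(self, card):
        """Move a card out of this column, freeing a WIP slot."""
        self.cards.remove(card)
```

A board tool like Jira enforces exactly this rule per column; the refusal is what makes the constraint visible and keeps work flowing instead of piling up.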
At this point we begin planning our rollout, starting with our SaaS customers, and then coordinating potential upgrades for each of our self hosted customers.
We manage a release wiki on Confluence for tracking story questions and for providing release and iteration burndown graphs as progress reports. Naturally the release plan is always very minimal: we list the stories for the release, a couple of graphs showing progress, and any large user stories with rough mockups and questions. As mentioned earlier, we try to do only as much documentation as needed to start coding; we let the product itself speak to the customer and refine from there. I’d rather spend half a day coding something for them to see and play with than building a document, because they will have more questions and refinements after playing with it than after reading a document.
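A burndown graph is just a running subtraction plotted over time; the data behind one can be sketched as (numbers illustrative):

```python
def burndown(total_points, completed_per_day):
    """Remaining story points at the end of each day of an iteration."""
    remaining, series = total_points, []
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series
```

Plotting the series against an ideal straight line from `total_points` down to zero shows at a glance whether the iteration is ahead of or behind schedule.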
Even though we pride ourselves on being stable every two weeks, we support a half dozen browsers and various release environments, so we still give ourselves one month after code complete to finish any last-minute testing before migrating to our SaaS or self-hosted customers. I have always been very paranoid about buggy releases, and would rather over-prepare for a release than ship something buggy and deal with the aftermath.
We usually strive for one major and one minor release a year, meeting at least twice a year to review and re-prioritize our backlog of features. We build an annual roadmap which we strive to meet; however, at times we take on “for pay” features, and this delays baseline features on our roadmap.
I believe being Agile applies not only to development but to all aspects of a business, including operations. While we prefer to keep development documentation to a minimum, I’m not anti-documentation. I believe in minimizing the bus factor, so every repeatable task, policy, and procedure must be documented in case a new employee needs to be brought up to speed quickly. Our document archive contains hundreds of documents, ranging from how to install and deploy a development server, to frequently asked questions for salespeople, to success profiles for Human Resources.
Part of being Agile is keeping everything as simple and automated as possible, including all of our user and server environments. Years ago we moved all aspects of our business to the cloud, including the following:
- Source Repositories
- Continuous Build servers
- QA servers
- Issue Tracker
- Production servers
This allows us to work without worrying about backups or maintaining infrastructure, and lets us focus on what we’re good at: supporting and developing our product. In addition, our development environments are set up as virtual machines so a new programmer can easily spin up a build environment; in my case, I love using them to run automated tests in a virtual machine while I continue to work on my main system. I also run Ubuntu as my main development box and use Windows-based virtual machines for my local continuous build environment.
Where do we go from here?
I feel that if you’re not always looking for ways to improve your processes, you’re likely deluding yourself; no one has everything running absolutely perfectly, and there’s always room for improvement. In my case, my wish list includes further modularizing our product to speed up build promotions and minimize the risk of upgrades, as well as automating build promotions. Right now, deploying a new build to production requires a person to be available at night to upgrade the WAR and reboot the instance, which means three to four minutes of server downtime.