Pitfalls of Canned New Construction Specifications

Jun 17, 2015 6:00:00 AM

Recently I performed new construction commissioning services on a project whose job specification document ran over 1,200 pages. As you can imagine, it was very comprehensive. However, I found that it was comprehensive in all the areas where it didn't need to be, and not specific enough where it mattered.

Image by Flickr user Horia Varlan

New Construction Specifications

It seems to me that the new construction industry has undergone a commoditization of certain services as a reaction to the demand for ever-shrinking budgets and shorter construction schedules. Building automation systems are particularly subject to this commoditization, and as a result certain pitfalls are being created in the specifications used in our industry. I'll focus on the specifications concerning control systems.

Pitfall 1: “CYA” Spec Sections



  1. The Controls Contractor shall fully simulate the BAS for all phases of operation prior to site delivery of the control system.
  2. The simulation shall be performed in order to fully test the system prior to installation.
  3. The simulation shall be performed off-site at the Controls Contractor's facility.
  4. The testing shall include each BAS panel's control program and system graphic.
  5. The testing shall include the entire system as a whole; each panel shall be networked together along with a workstation during the simulation.

The section above calls for the automatic temperature controls (ATC) contractor to fully simulate the building automation system (BAS) prior to delivery of the control system. In the real world, this is impractical and only happens if the controls contractor has pre-programmed (or "canned") programs that have already been tested. Since buildings are highly customized, that is rarely the case. The only other way a contractor can satisfy this requirement is by putting a high price on the project to cover the extra time. This essentially takes a testing procedure that normally occurs twice on a large project (during ATC startup, and then again during Cx functional testing) and adds a third round. It's redundant, and it's probably being ignored by the ATC and not enforced by the construction manager.

The reason this section exists in a spec is so that someone can point to it when something goes horribly wrong in the 11th hour and use the text as a bullet-proof vest for everyone involved except the ATC. That kind of situation is counterproductive to a team effort and can poison the atmosphere for delivering a building to an owner on time. If I were writing this spec, I'd do away with the section altogether. The rest of the spec should contain enough language to ensure the system's functionality (with the aforementioned two rounds of functional testing).

Pitfall 2: Specifying the Impractical or Impossible



  1. Comply with the following performance requirements:
  2. Graphic Display: Display graphic with minimum 20 dynamic points with current data within 10 seconds.
  3. Graphic Refresh: Update graphic with minimum 20 dynamic points with current data within 8 seconds.
  4. Object Command: Reaction time of less than two seconds between operator command of a binary object and device reaction.

The section above specifies the response time of a control system, which exposes a disconnect between the engineering community and the ATC community. Network tuning practices aside (especially on LonWorks networks), if the ATC has followed all the manufacturer's instructions to the letter, the response time of a point update from the unitary controller to the graphics is a function of the number of devices on the network and their traffic. Most ATCs therefore cannot predict a network's reaction times down to the second, or even the minute, during the design phase and still remain competitively priced.
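To see why a fixed refresh-time requirement is hard to guarantee, consider a rough back-of-the-envelope model in which graphic polls share the field bus with background traffic. Every number here is an illustrative assumption (the polling rate, the 50-device "knee," the traffic share), not a vendor figure:

```python
# Rough model of BAS graphic refresh time on a shared field bus.
# All constants are illustrative assumptions, not vendor figures.

def graphic_refresh_seconds(num_devices: int,
                            points_per_graphic: int = 20,
                            polls_per_second: float = 40.0,
                            background_traffic_share: float = 0.5) -> float:
    """Estimate the time to refresh a graphic whose dynamic points are
    polled over a bus where background traffic grows with device count."""
    # Assumed: polling throughput degrades linearly past a 50-device knee.
    effective_polls = (polls_per_second * (1 - background_traffic_share)
                       * (50 / max(num_devices, 50)))
    return points_per_graphic / effective_polls

print(graphic_refresh_seconds(50))   # small network: 1.0 s
print(graphic_refresh_seconds(200))  # same spec, 4x the devices: 4.0 s
```

The same 20-point graphic that refreshes in a second on a small network takes four times as long when the device count quadruples, even though nothing in the controller programming changed. That is exactly the dependency a flat "within 8 seconds" spec line ignores.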

Photo by U.S. Army Corps of Engineers via Flickr.

In my 12 years as an ATC programmer, I never actually had a commissioning agent or design engineer hold me to this type of specification. If the response of a controls network was slow, it was because the customer wanted 13 eggs in the carton they purchased from me for the price of a standard dozen. Upon seeing the slow response time, an engineer or Cx agent would question me about it, I would explain the overly complex system jammed into a small network, and they would generally agree with the statement, "I can't make it any faster without the owner spending more money."[1] This highlights the impact of the commoditization of controls in the construction industry, or as I think of it, the demand for all the luxury features at the economy price point.

Pitfall 3: Specifying to the Point of Torpidity



Trend one month of data as follows:

  1. Trend all analog input values on a 30 minute basis.
  2. Trend all digital input points on a change of value basis.
  3. Trend all analog virtual points on a 60 minute basis.

When trending indicates system instability for certain points, set up additional trending for one week as follows to facilitate tuning and troubleshooting:

  1. Trend all associated analog input points on a 10 minute basis.
  2. Trend all associated digital input points on a change of value basis.
  3. Trend all associated analog outputs on a 10 minute basis.
  4. Trend all associated digital outputs on a change of value basis.
  5. Trend all associated virtual analog points on a 10 minute basis.
  6. Trend all associated virtual digital points on a change of value basis.

Reporting system shall automatically email trend reports to the Engineer and the Commissioning Agent on a daily basis.

This spec item makes sense as a means of ensuring the scope of the ATC's trending software is adequate. The depth and breadth of trending in a control system varies widely depending on your ATC. This lack of standardization is something the controls industry should be addressing within the next five years (analytics and big data), and should've addressed five years ago. All data points should be trended, always[2] (unless the DDC control system lacks a PC/server on which to store the data).

The downside to this spec item is that it is too specific. A more abstract, simplified set of statements is more likely to get the desired result. The ATC is going to do what they've always done unless they're called out by an engineer or Cx agent. Changing the trending characteristics of each set of points is relatively easy in most systems, but can be labor intensive (and thus cost prohibitive). A better spec line here would be something to this effect:


Trend every analog point in the system on a 15 minute interval. Trend all Booleans on a change of value basis.

  1. Store all trended data indefinitely for future review by the owner.
  2. Estimate at most 8 hours extra to modify trend intervals and configurations on problematic systems identified during startup and commissioning.
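A two-rule spec like this is also trivial for the ATC to apply mechanically across an entire point database. As a sketch, here is how the rules map onto a point list; the point names and type codes are hypothetical, not a real BAS export:

```python
# Apply the simplified trend spec uniformly: analog points on a
# 15-minute interval, binary (boolean) points on change of value.
# The point list below is a made-up example, not a real BAS export.

POINTS = [
    ("AHU1-SAT", "analog"),        # supply air temperature
    ("AHU1-SF-STATUS", "binary"),  # supply fan status
    ("VAV101-ZNT", "analog"),      # zone temperature
    ("VAV101-OCC", "binary"),      # occupancy
]

def trend_config(points):
    """Return one trend definition per point from the two-rule spec."""
    config = {}
    for name, kind in points:
        if kind == "analog":
            config[name] = {"mode": "interval", "minutes": 15}
        else:  # binary / boolean
            config[name] = {"mode": "change_of_value"}
    return config

for name, cfg in trend_config(POINTS).items():
    print(name, cfg)
```

The point is that the engineer states the intent in two sentences and the contractor fills in the mechanics, instead of the spec enumerating every point class and interval up front.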


As an industry we need to work toward a spec that is comprehensive enough to achieve the owner's project requirements, yet doesn't dissuade the contractors bound by it from actually reading their sections. Like any good legal document (and I know there aren't many out there), it should convey the content in easy-to-digest chunks without inflating the labor required to satisfy its requirements. Its intent should be simply and clearly defined, while leaving the details to the contractors. If the intent isn't met, then the spec isn't met, and that should be easy to determine. If you're writing a spec or reviewing one for building controls and find yourself wanting a second opinion, please get in touch with us.


[1] In this situation, the fix would involve breaking up a network into smaller sub-networks, functionally grouped by data dependencies, so as to optimize each sub-network's traffic. This usually meant purchasing more hardware and paying for the labor required to re-program and set up the optimization.

[2] I say this because a double-precision number consumes 64 bits (8 bytes) of storage. Today I can purchase a three-terabyte (3×10¹² bytes) drive for $100. Even assuming that the number plus its associated metadata overhead quadruples the space consumed to 32 bytes per record, each trend record costs a tiny fraction of a penny, on the order of $10⁻⁹ (yes, that's an exponent of negative nine!).
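The footnote's arithmetic, spelled out. The 32-bytes-per-record figure (an 8-byte double quadrupled for metadata overhead) and the $100 / 3 TB drive are the footnote's own assumptions:

```python
# Per-record storage cost of trending every point, per footnote [2].
# Assumes 32 bytes per trend record (8-byte double x4 for metadata
# overhead) and a $100 drive holding 3 TB (3e12 bytes).

drive_cost_usd = 100
drive_bytes = 3e12
bytes_per_record = 32

records_per_drive = drive_bytes / bytes_per_record
cost_per_record = drive_cost_usd / records_per_drive
print(f"${cost_per_record:.2e} per record")  # on the order of 1e-9 dollars
```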

Written by Rick Stehmeyer