Main Page-v4b


NU: ICT support for a Smart and Green building

Document history:

version | date | notes
0.4b | May 30, 2016 | Wiki feedback UvA/VU; snapshot for FCO
0.3 | April 26, 2016 | Kees Verstoep, Robert Belleman; processing of comments from the 1st project meeting
0.2 | April 18, 2016 | Kees Verstoep, Robert Belleman; processing of comments from Boy Menist; incorporation of information on “The Edge”
0.1 | April 8, 2016 | Kees Verstoep, Robert Belleman

This document describes the requirements for the ICT infrastructure of the future new home of the Informatics departments of VU and UvA: the “New University” (NU) building at the Amsterdam ZuidAs. This building will be an important factor in realizing our research ambitions, providing an excellent environment to participate successfully in often externally funded research programs.

Past experience with the UvA/VU Beta collaboration in the O|2 (VU) building has shown that substantial investments were required to make basic facilities (like building access, networking and printing) available to users from both organizations. For the O|2 building, the ambitions with respect to ICT functionality were rather limited: to provide a UvA employee with access to the same facilities previously available at Science Park. The NU building will house both Informatics departments of UvA and VU (i.e., the combined Department of Informatics, DoI), and the requirements with respect to ICT support are therefore significantly higher.

Truly benefiting from co-located Informatics departments collaborating on many projects, making use of all state-of-the-art facilities provided by the new (NU) environment, will still require much preparation work. This document integrates feedback from previous information gathering rounds at UvA and VU, and is meant as a repository of requirements that will be used for the initial design of the ICT-related facilities in NU.

This document lists a number of important ICT-related themes, for each of which the following structure is used:

  • Description;
  • Requirements with respect to NU;
  • Organization;
  • Costs.


With respect to “Organization”, it should be mentioned that UvA and VU have somewhat different structures in how ICT is organized. Both UvA and VU have a “Central IT” department that manages the main IT infrastructure and services (including, e.g., network, mail, storage and desktop support).

Separate from that, UvA/FNWI has a “FEIOG” group which provides more specialized ICT support for the Beta domain. At VU, an “ITvO” group exists with similar ambitions, but for research in general; it is part of central VU/IT. It should be noted that FEIOG and ITvO are very small (about 6 FTE each) compared to Central IT at UvA and VU (over 200 FTE each).

At the department level, ICT support exists mostly in the form of “Scientific Programmers”, who also manage part of the Informatics-specific research equipment. In the remainder of the document, all Department of Informatics ICT support personnel will be grouped under the term “DoI”, although some differences exist in how they are organizationally embedded at UvA and VU.

The document ends with a section listing action items for follow-up projects that need to be completed in preparation for NU.

ICT at the workplace

At NU, the ICT support related to desktops and on-campus IT services will (most likely) be provided by VU/IT, due to their physical proximity. However, most support services at the ZuidAs campus are currently only available via a “VU-net” web portal, which is only accessible using a VUnet-id.


We need to make sure that UvA and VU employees can effectively work together in NU and will have the same level of support. This will mean that either:

  • VU-net services will also be available for UvA employees/students via UvAnet-id’s, or
  • UvA employees will get an additional VUnet-id (maybe as a temporary measure; UvA students already get a VUnet-id when needed).

Experiences with existing UvA/VU collaborations like ACTA and with the new ones in O|2 will be evaluated to determine whether the solutions implemented there are also acceptable for DoI, or whether alternative solutions need to be implemented.

An important aspect of the day-to-day work experience will be how the offices are designed, in particular to what extent and in what form a “FLEX” concept will apply to some of the employees. This will also have an impact on workplace ICT support, e.g., the requirement for a versatile KVM (Keyboard, Video, Mouse) solution that works for Windows, Mac and Linux laptops.

We will need to check experiences with the current solutions applied in the O|2 building. Another useful step would be to let some employees integrate with a partner section at the other institute (say, for one day a week) already before the move to NU. Limitations due to insufficient cross-organizational ICT support should then quickly become visible, and can be tackled by formulating specific collaboration use cases that will need to be supported in NU.


Workplace support is done by Central IT


Costs to develop cross-organizational ICT support will likely be substantial, as experience with the VU O|2 building has shown. However, extended functionality for UvA/VU collaborations in O|2 is already part of an ongoing project at VU/IT for the coming years, and DoI at NU is likely to benefit from this.

Building and Department facilities

The combined Informatics department will be an important user of NU, but not the only one. Collaboration with other departments or faculties in the NU building is expected to be common. Each department should have access to facilities that are central to its daily operations: telephone, printer/scanner/copiers, productivity software.


Access to the building should be possible outside regular office hours. To facilitate collaboration with researchers outside DoI, it should be easy to arrange (possibly permanent) access to the floors where DoI researchers are located. It should also be easy to arrange meeting rooms, both by on-line booking in advance and by ad hoc reservation.

There should be access to a sufficient number of printers, scanners and copiers, preferably integrated into the same device; assuming the DoI is divided into groups of 50 people, one multi-purpose device per group seems reasonable. All printers should be capable of full-colour printing (as is the default at the VU). Scanning should support multi-page, batch scanning to digital format (e.g. PDF).

There should be access to central productivity software: “office” software (document, spreadsheet and presentation development), software development, software revision management, project management, bug tracking system, web hosting, database management, Wiki.

All facilities should support multiple operating systems and should also be accessible to students and staff who bring their own device (BYOD). Operating systems to support: Windows, Linux, macOS, Android, iOS.


General facilities will be handled by VU/FCO (VU Campus Organization for Facilities) with support from Central IT for identity management.



Educational Facilities

NU will be a multi-purpose building, supporting research, education and leisure. The educational facilities will replace a significant part of the W&N educational facilities. W&N currently offers lecture space consisting of 51 rooms with 2891 seats, and 18 computer rooms with 462 available computers, most of them Windows-based and 48 of them Linux-based.

Regarding these facilities, we encountered a significant difference between the UvA and VU approaches. The VU still works from the idea that, during classes, each student should be able to use a VU-supplied PC, including the required software. The UvA no longer supplies PCs for students and requires each student to own a notebook; the “computer rooms” of the UvA provide only power outlets and WiFi. The UvA does supply the students with the software they need.


Lecture space in W&N consists of 51 rooms with 2891 seats in total. Preferably, these rooms will be as flexible as possible in the arrangement of the furniture, and supplied with state-of-the-art presentation and interaction devices like interactive smart boards, video recording equipment and sufficient WiFi capacity.

The lecture and computer rooms are not the only educational facilities that NU will require. Specifying the educational IT requirements of the NU building requires an impression of the major educational developments. We see the following:

  • IT has taken a central role in higher education
  • IT supports both the educational content and the educational methods
  • Education is moving toward a more personal approach
  • Education is becoming more and more (inter-)active
  • IT is moving toward "bring your own devices" (BYOD) support

This leads to the following facilities:

  • Virtual desktop facilities: virtual PCs for standard applications, virtual servers for more computing-intensive tasks
  • Virtual collaboration facilities: portals, storage, video, etc.
  • Actual collaboration facilities: team spaces with presentation screens and communication facilities.

Especially for Informatics there will be additional requirements directly related to the specific content of the Informatics courses: virtual desktops fully controlled by the Informatics department.



ICT in lecture rooms and meeting rooms

NU should support multimedia facilities in all lecture and meeting rooms: a monitor in small rooms, and in large rooms a beamer of sufficiently high resolution (full HD or better) and brightness. Connectivity should be supported for standard PCs and laptops. Large lecture rooms should support audio amplification for both lecturer and laptop. A support group should be readily available in case of problems with the multimedia facilities in lecture rooms.

At least one meeting room should be available with high-end teleconferencing facilities to support remote collaboration; Skype and Webex are free/cheaper solutions which will suffice in most cases. Large lecture rooms should be equipped with (operator-free) support to record MOOC course material. When large lecture rooms have multiple screens, it should be possible both to mirror the same content on all screens and to show different content on each screen.

One or two large “info screens” should be placed in the NU central hall that show which lectures/events are taking place in which lecture rooms, news flashes, announcements of upcoming events, etc.


More detailed requirements:

  • Meeting rooms: monitor, 40 inch, full HD resolution at minimum (4K preferred, cost is no longer prohibitive); connectivity:
    • wired: VGA-D15, HDMI, DisplayPort
    • wireless: Airplay/DLNA/Miracast
    • power: outlet at the center of the table
    • Possibility of deploying "Smart Whiteboards" in a few rooms (e.g. …)
  • Lecture rooms: beamer, full HD resolution at minimum, connectivity:
    • wired: VGA-D15, HDMI, DisplayPort
    • wireless: Airplay/DLNA/Miracast
    • Audio amplification; wireless microphone for lecturer, laptop audio connection
    • power: outlets at each row
  • Teleconferencing equipment
  • Recording equipment to record MOOC videos in a designated lecture room:
    • Cameras
    • Three-point lighting
    • Backgrounds: greenscreen, neutral background
  • Large info screens in central hall: monitor, full HD resolution at minimum


Lecture and meeting rooms are managed by VU/FCO. VU/FCO has links with VU/AVC (Audio/Visual Center) which can give special attention to higher than usual requirements (AVC also provided technical support for the VU Intertain Lab). If needed, some of the audio-visual infrastructure may have to be installed by a third party.


Monitor: 40 inch: ~1.5kE, 60 inch: ~3kE (excl. wireless)

Beamer: ranges between 1kE and 20kE, depending on size of the projection (excl. screen).

Teleconferencing equipment: ~5kE?

Video recording studio: ~10kE


Network facilities

For DoI, networks are amongst the subjects of research and education. This means that researchers and students should not be hampered by the ICT facilities in the NU building. Lessons learned by the UvA Informatics Institute with the move to the FNWI building at Science Park, as well as the transition to a centralized ICTS department, have shown that this is not obvious and therefore warrants mentioning. NU should support various types of networks to accommodate current and future research projects. Besides the standard networks supported and managed by the central IT department, additional network facilities need to be allowed (including routing and switching infrastructure and physical ports in each room) that can be managed by Informatics research/support staff.

The VU campus network currently provides several zones that provide different functionality, which is linked to the status of the equipment attached to it:

  • green: fully managed by VU/IT;
  • red: managed by the user;
  • purple: managed by the user, with no network filtering applied (for servers);
  • “research network”: similar to purple, but VU/FEW only, and predating the purple zone.

In NU, the “research network” concept should be revived, e.g., to support ScienceDMZ functionality (see …), IoT devices and applications, and other DoI research scenarios for which the standard “managed” VU/IT network would be too limiting.


More detailed requirements:

  • High bandwidth access to the Internet, of at least 10 Gbit/s;
  • The wired network should be at least CAT6-based to allow for bandwidths above 1 Gb/s;
  • Support for Power-over-Ethernet (PoE);
  • 802.1x support (Identity based network access control) on the regular networks;
  • Possibility to install and manage network equipment on dedicated research networks:
    • Possibility to use software defined networking (SDN);
    • Possibility to add an additional switch or router to a research wall outlet, for specific experimental usage;
  • Lightpaths to remote resources (at Science Park and elsewhere) should be possible in all server rooms; in particular DWDM fiber connectivity for SNE research;
  • There must be at least two network ports per workplace (preferably coloured differently to clearly tell them apart) to conveniently allow access to the research networks in all rooms: one “standard” port that is protected from the outside world, the other specifically for research and education that can be fully configured to support specific applications.
  • The standard (IT-supported) WiFi network should also easily be available for ad hoc guest usage, not requiring administrative preparation in advance;
  • A separate WiFi network will also be required for research purposes, e.g., to allow a high density of devices in some areas, to support robots, drones and Internet-of-Things devices, to provide indoor location assistance, direct access to monitoring information, etc. To avoid interference between the general WiFi and the research WiFi networks, some policies regarding (e.g.) channel usage need to be agreed upon. Alternatively, a special "DoI-research" SSID needs to be made available in NU on which devices can be registered (ad hoc) on a research VLAN.
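To illustrate the kind of channel-usage policy mentioned above, the sketch below checks that two 2.4 GHz channel plans do not interfere. It is illustrative only; the channel lists are made-up examples, not an agreed NU allocation.

```python
# Toy sketch: check that the "general" and "research" WiFi deployments
# use non-overlapping 2.4 GHz channels. Channel centres are 5 MHz apart
# and 802.11g/n channels are ~20 MHz wide, so two channels interfere
# unless their numbers differ by at least 5 (hence the classic 1/6/11 plan).

def channels_overlap(a: int, b: int) -> bool:
    """True if 2.4 GHz channels a and b interfere with each other."""
    return abs(a - b) < 5

def plan_conflicts(general: list[int], research: list[int]) -> list[tuple[int, int]]:
    """Return all (general, research) channel pairs that would interfere."""
    return [(g, r) for g in general for r in research if channels_overlap(g, r)]

# Hypothetical example: general WiFi on channels 1 and 6, research WiFi on 11.
print(plan_conflicts([1, 6], [11]))   # no conflicts
print(plan_conflicts([1, 6], [8]))    # channel 8 clashes with channel 6
```

A real policy would of course also cover the 5 GHz band and transmit power, but the same kind of automated check applies.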

[Figure: Architectuur informatica.jpg — overview of the Informatics network architecture]


The standard wired/wireless networks will be managed by Central IT. The research-specific networks will be managed by DoI.


The SNE research group at the UvA Informatics Institute currently has access to a high-speed DWDM fiber network to support their research. This has been made possible largely because of the proximity to SURFsara and AMS-IX at Science Park. A DWDM solution between UvA and VU data centers is currently being investigated. As this would significantly benefit the UvA/VU IT support departments themselves, it may be expected they can cover most of the base costs, while the additional equipment to make this suitable for networking research could be funded by DoI.

The additional wall outlets should not be very expensive, as this is part of a new installation; adding ports at a later time would typically be much more expensive.

Guest usage of WiFi is already being rolled out in the new VU O|2 building, so no additional development costs are to be expected here.

The costs for an independent “research” WiFi installation would depend on the scale at which this would be deployed: at a few specific places only (e.g., in the labs, see below), or in the entire NU building.


Data storage

UvA and VU provide various data storage options:

  • Central IT storage with limited capacity per user (typically tens of gigabytes maximum), which is tied to the regular UvAnet-id/VUnet-id account;
  • SURFdrive provides data sharing across university boundaries, but given the capacity (typically gigabytes per user) and protocols supported, it is not suitable as a high performance data storage and sharing facility;
  • Large-scale storage for specific projects (e.g., VU offers “SciStor”, a 200 TB storage facility accessible/mountable via VUnet-id credentials; UvA provides a Faculty Storage Service accessible by UvAnetID); this provides reliable storage at a price point that is close enough to bare disk hardware costs, so there is no reason for researchers to buy and manage their own small RAID boxes;
  • HPC compute resources (see below) usually have their own large-scale storage solution integrated; however, this is typically just for internal use on the system itself.
  • Several research groups at UvA have installed their own servers equipped with storage.


With UvA and VU researchers collaborating intensively inside NU, it is clear that the current storage and data sharing options will be insufficient.

For example, what is still missing, is a high performance, high capacity, data storage facility that supports projects that involve both VU and UvA members (SURFdrive is neither high capacity nor high performance). A pilot project to provide such a system is currently being started.

In addition, at UvA/FNWI a facility is being constructed within the project “Research Data Management” (RDM). A decision by the board of UvA in December 2014 requires that all UvA/HvA research units manage research data according to a “Research Data Management Plan”. The specification of proper RDM guidelines is also on the agenda at VU, and under discussion in the Beta VU IT committee. Given the DoI collaborations in NU, upcoming UvA/VU RDM guidelines should be properly aligned. For an overview of the UvA RDM policy and guidelines, see …


Central IT provides support for regular storage. Support for high performance, high capacity storage will most likely be provided by Beta IT support (UvA/FEIOG and VU/ITvO).


Small-scale storage is charged at a fixed rate per employee, with additional costs when the default quota is insufficient. However, the costs per Gigabyte are relatively high (VU/IT charges EUR 4.20 per additional Gigabyte per year). Large-scale storage is much cheaper on average (for comparison, SciStor is charged at EUR 0.25 per Gigabyte per year); it is typically charged per project, based on the storage capacity reserved for the project.

The Faculty Storage Service at UvA costs EUR 110 per Terabyte per year without replicas, EUR 125 per Terabyte per year with replicas (if there is sufficient demand), up to a maximum of 100TB (beyond that, specific solutions will be designed). Requests up to 10TB can be honoured quickly. Costs for a combined UvA/VU storage facility to be developed should be similar.
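To put the quoted rates on a common footing, a small calculation (figures taken from the text above; the per-TB conversion assumes 1 TB = 1000 GB):

```python
# Back-of-the-envelope comparison of the storage price points mentioned
# above (all figures in EUR per year, taken from the text).

def per_tb(price_per_gb: float) -> float:
    """Convert a per-Gigabyte yearly price to a per-Terabyte yearly price."""
    return price_per_gb * 1000

vu_it_small_scale = per_tb(4.20)   # ~EUR 4200 per TB per year
scistor           = per_tb(0.25)   # ~EUR  250 per TB per year
uva_faculty       = 110            # EUR per TB per year, without replicas
uva_faculty_repl  = 125            # EUR per TB per year, with replicas

print(f"VU/IT small-scale : {vu_it_small_scale:7.0f} EUR/TB/yr")
print(f"SciStor           : {scistor:7.0f} EUR/TB/yr")
print(f"UvA Faculty (repl): {uva_faculty_repl:7.0f} EUR/TB/yr")
print(f"ratio small/large : {vu_it_small_scale / scistor:.1f}x")
```

The roughly 17x gap between per-user small-scale storage and the large-scale facilities is exactly why researchers should have easy access to the latter.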


Computing facilities

Apart from processing capacity provided by regular desktops and laptops, many Informatics researchers depend on various other computing resources, e.g., for the important Big Data and High Performance Computing (HPC) domains. Some of these resources are housed locally and managed by Informatics support staff, and some are housed and managed externally.

Locally managed resources are:

  • High-end workstations, sometimes equipped with GPUs (some specifically for computing instead of visualization, some specifically for visualization); typically managed by the researchers themselves;
  • DAS-4 and DAS-5, which are both at UvA and VU, but are also part of a larger national distributed system for Informatics research; DAS is not a general-purpose "production" HPC service, however (it also serves a secondary role for teaching purposes);
  • General-purpose compute servers managed by central IT, FEIOG (UvA) or ITvO (VU), or the research groups themselves.
    • UvA:
      • SNE: 2 racks (including one optics rack)
      • ILPS: 2 racks
      • CSL+ISLA+CSA: 0.5 racks
    • VU:
      • BioInformatics: 1 rack
      • Minix/Security group: 1 rack
      • Business, Web and Media: 1 rack
  • General-purpose VM facility, e.g., SciCloud as currently managed by ITvO (VU)
  • Internal GitHub

Externally managed resources are notably the ones located at SURFsara:

  • Cartesius (national supercomputer);
  • LISA (national cluster);
  • HPC-Cloud;
  • Hathi Hadoop cluster.


Room ventilation should be sufficient to deal with the heat from high-performance workstations; a truly “FLEX” office concept (assuming low-powered laptops with docking stations) is incompatible with such machines, so they may need to be located in a special room type with additional cooling.

Attention should also be paid to computing facilities (CPU and GPU) for student courses, which are required only during specific periods of the year. In some cases DAS is used for these courses, but providing access to this cluster to 50 students for extended periods of time falls outside its terms of use. For example, the Web programming course at UvA requires 50 independent “LAMP” (Linux/Apache/MySQL/PHP) Virtual Private Servers for a period of one and a half months.
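To give an impression of the scale of such course provisioning, the sketch below generates one container command per student as a dry run. The container names, port scheme and `php:apache` image choice are illustrative assumptions, not an existing DoI setup.

```python
# Sketch: generate provisioning commands for N per-student "LAMP-style"
# web servers. This is a dry run that only prints the commands; the
# container names, port scheme and image choice are illustrative.

def provision_commands(n_students: int, base_port: int = 8000) -> list[str]:
    """One web container per student, each published on its own host port."""
    return [
        f"docker run -d --name lamp-student{i:02d} "
        f"-p {base_port + i}:80 php:apache"
        for i in range(1, n_students + 1)
    ]

cmds = provision_commands(50)
print(len(cmds))   # 50 servers, as required by the course
print(cmds[0])
```

A real deployment would add per-student MySQL instances, quotas and cleanup after the course, but even this sketch shows the task is easily automated once a VM or container platform is available.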


DoI mostly supports its own compute infrastructure itself (e.g., DAS and other Informatics-specific compute servers). FEIOG (UvA) and ITvO (VU) manage additional HPC and Cloud resources, typically for broader domains than just Informatics. Most of the external resources are managed by SURFsara.

With UvA and VU Informatics joining forces, it will become attractive to provide a certain amount of general-purpose HPC capacity on-site in NU, as this also allows efficient co-location of data storage and processing. This would also make UvA and VU less vulnerable to changes in SURFsara pricing and system support policies. The migration to the NU datacenter is also a good opportunity to consolidate ad hoc compute services that were purchased for specific projects over the years; this would also reduce the overall system management effort required. A solution where a base HPC and Cloud infrastructure is available in-house, with support to scale out to external resources where needed, seems most attractive. It will then need to be considered carefully whether this is best arranged by DoI itself (as is currently the case, only with fragmented systems and support) or at a more central level.


High end workstations that exceed standard desktop specifications are typically purchased from project- or research group budget.

The DAS hardware investment is funded by NWO, with a 25% matching by participating institutes, who also take care of housing and support costs.

Computing capacity at SURFsara is typically provided on the basis of a specific project proposal, but in the past both VU and UvA have in addition (at a central university level) invested a significant yearly sum in the support and extension of the LISA cluster; this is expected to continue (UvA: 450 kEur/yr, VU: 650 kEur/yr), at least for the next few years. Policies to properly divide the compute capacity over the various departments (Informatics is certainly not the biggest consumer at the moment) are being discussed at the HPC councils of both UvA and VU.


Equipment housing

Housing facilities for IT equipment will be offered via several dedicated server rooms and labs, but also at a smaller scale, distributed over the building.

Multiple categories of equipment and access patterns exist:

  1. servers that rarely need access (only by system administrators), this includes housing for HPC and storage;
  2. equipment occasionally accessed physically by researchers, this includes hosting of dedicated servers or custom hardware and networking equipment for research;
  3. servers from non-Informatics research institutes or from Beta IT support (e.g., housing of large-scale storage and VM hosts);
  4. servers accessed by students; e.g. the server room provided to students in the Master SNE;
  5. smaller equipment that often requires physical interaction, proximity or a special environment (e.g., robotics, visualization, virtual reality).


Categories 1 and 2 will be supported by the data center located on the top (12th) floor of the NU building, which will have a capacity of at least 24 racks. In a previous phase, the UvA and VU research server rooms have been checked and an overview of the rackspace to be provided for current and future Informatics research purposes has been made. The NU data center design was dimensioned based on this inventory.

Category | Description | Current location | Racks | Load per rack (kVA) | Total load (kVA)
VU research | DAS-4/DAS-5 | W&N S-411 | 7 | 16 | 112
VU research | Informatics Misc | W&N S-411 | 3 | 13 | 39
VU research | BioInformatics | W&N S-411 | 0.5 | 13 | 6.5
VU research | VM/Storage | W&N S-405 | 2 | 13 | 26
VU research | Security/Minix | W&N S-405 | 0.5 | 13 | 6.5
VU research | Network | W&N S-411 | 1 | 6 | 6
UvA research | IvI Misc | SP904 D3.130 | 5 | 13 | 65
UvA research | Vis/SNE lab | SP904 D3.132 | 2 | 13 | 26
UvA research | scratch space | | 1 | 13 | 13
Total | | | 22 | | 300
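The totals in the inventory can be verified by recomputing them from the individual rows:

```python
# Consistency check of the rack-space inventory above: recompute the
# totals from the individual rows (racks and per-rack load in kVA).

rows = [
    # (description, racks, load per rack in kVA)
    ("DAS-4/DAS-5",      7,   16),
    ("Informatics Misc", 3,   13),
    ("BioInformatics",   0.5, 13),
    ("VM/Storage",       2,   13),
    ("Security/Minix",   0.5, 13),
    ("Network",          1,    6),
    ("IvI Misc",         5,   13),
    ("Vis/SNE lab",      2,   13),
    ("scratch space",    1,   13),
]

total_racks = sum(racks for _, racks, _ in rows)
total_load  = sum(racks * load for _, racks, load in rows)
print(f"{total_racks:g} racks, {total_load:g} kVA")   # matches the table totals
```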

Some of the equipment in categories 1 and 2 that is currently used at Science Park is now foreseen to stay there (e.g., parts of DAS/UvA and equipment for which large-scale DWDM connectivity to Science Park and/or AMS-IX/Netherlight is essential). This may leave room for a number of racks in Category 3. Details still need to be worked out, however. The reliability of the power and cooling of the NU data center is important; it will only be affordable to buy UPSes for a small part of the research equipment there.

Category 4 includes equipment of the SNE master; this will be housed in a smaller server room in the middle of the building (probably level 7), with a capacity of about 10 racks. The power and cooling requirements will be significantly below those of the data center.

The remaining equipment in Category 5 typically requires little space and energy, and thus no active room cooling. It can be located in one of the TechLabs (see below), be part of the overall building research infrastructure (e.g., some sensor or beacon equipment may fall in this category), or be used directly at the working place of the researcher.

To support “green ICT” research, all NU system housing locations should be equipped with energy measuring equipment that can be monitored by the researchers.
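As a minimal sketch of what researcher-facing energy monitoring could look like, the snippet below converts periodic power readings from a rack's metering equipment into energy consumption. The sampling interval and readings are made-up illustration values, not measurements from any existing installation.

```python
# Sketch: turn periodic power samples (Watts) from a rack's metering
# equipment into energy use (kWh) via trapezoidal integration.

def energy_kwh(samples_w: list[float], interval_s: float) -> float:
    """Trapezoidal integral of evenly spaced power samples, in kWh."""
    joules = sum(
        (a + b) / 2 * interval_s
        for a, b in zip(samples_w, samples_w[1:])
    )
    return joules / 3.6e6   # 1 kWh = 3.6 MJ

# One hour of readings, every 10 minutes, from a hypothetical rack:
readings = [4000, 4200, 4100, 3900, 4300, 4000, 4100]  # Watts
print(f"{energy_kwh(readings, 600):.2f} kWh")
```

For “green ICT” research the raw samples themselves (per rack, or ideally per server) would need to be exposed to researchers, so that such aggregation can be done at whatever granularity an experiment requires.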


DoI is the primary user of the NU datacenter, and a limited number of DoI employees should have full access and the means to set up and manage the equipment there (at some other universities, departments are instead treated as a “customer” which is only provided limited access, on a per-request basis). Access to the SNE server room should be fully controlled by DoI/SNE.


There are two main options to deal with the equipment housing costs:

  • pay them from specific research/group budgets, as in current UvA server rooms and labs;
  • pay them via a lump sum from the Informatics departments budgets; but this probably needs an additional feedback mechanism to assure efficient use of the available resources. VU server rooms, including the ones for research equipment, are currently paid via a university-wide lump sum.

Maybe for NU a middle ground between these two cost options should be found, where most of the housing costs are covered by a lump sum and a smaller amount is charged for actual use, to promote cost awareness. Otherwise researchers might avoid putting servers in the appropriate place (i.e., the data center or other server rooms), and put them in their working place just to work around the excessive system housing costs, draining their research budgets.

Lab facilities

UvA and VU (more precisely, the Network Institute) currently have a number of labs related to robotics, multimedia, visualization, gaming and networks, which will return in updated form in NU. These labs are used for research (by staff members, students and others) and for education.

We assume that the current lab spaces will be the minimal needed in the NU building (roughly 800m2). The current labs are:

Lab | Current location | Size | Usage
Intertain Lab | VU W&N S-111 | 120m2 lab, 35m2 back-office | Ambient living research, education, presentations, PR
Game Cella' Lab | VU Metropolitan building | 80m2 lab, 35m2 work places, 10m2 storage | Mostly research with focus on virtual/augmented reality; also some education, presentations
MediaLab | VU Main building | 65m2 cubicles, 40m2 large room | Mostly social sciences research, questionnaires using cubicles; will probably stay in current place
VU RobotLab | VU W&N P-437 | 40m2 | Robotics research
UX/Gaming Lab | UvA Science Park | 80m2 | Mostly ambient living based research
RoboLab | UvA Science Park | 100m2+ | Robotics research
Visualization Lab | UvA Science Park | 40m2 | Visualization research

Details are still being worked out; this will also depend on an evaluation of current and planned lab usage. A big advantage of the NU building is that most labs (called “TechLabs” at VU) can be co-located on a single floor, providing ways to share facilities and to scale the labs based on actual usage. Currently the new TechLabs are planned at two locations in the NU building. A new Intertain Lab, called the Iconic Lab, will be located on the first floor in the north-east corner of the building, right next to the new Library & Learning Center (LLC) of the UBVU. The other labs are now planned on the 6th floor. The exact configuration and location, except for the Iconic Lab, will have to be determined in the coming months.

Some of the facilities that will be available in the NU TechLabs are:

  • various sensor devices, beacons and actuators;
  • human-computer interaction devices, e.g., virtual reality, eye trackers;
  • presentation / visualization devices;
  • pervasive computing devices, used for education;
  • high-end gaming devices;
  • various robots;
  • drones (with safety measures to avoid accidents, e.g. a “flying cage”, discussed below).
Embedded systems lab

What is currently lacking is lab space suitable for embedded systems development, used for courses like Pervasive Computing and System Testing (which is based on embedded systems with microcontrollers). As a workaround, these courses are currently taught at VU in an underutilized physics lab, but for NU a permanent solution is necessary. Making it part of the TechLabs seems like a very good opportunity. Requirements are 16 large tables, each with a self-managed PC (e.g., for the latest Matlab and other development software), a beamer, a whiteboard, and lockable storage facilities for various types of support equipment like oscilloscopes, low-voltage sources and voltmeters. Support for setting up the equipment for lab experiments would be very useful.


There is also demand for a hacklab / tinkerspace where students can work on projects or play CTFs (computer security “Capture The Flag” games). This room should also be accessible in the evenings and at weekends, like the current RobotLab at the UvA. It should serve at least 20 people, have storage facilities for special support equipment, and a beamer. Combining it with the Embedded systems lab could be attractive.

Green Lab

The VU "Green Lab", used for research and courses on software energy efficiency, is currently housed inside the Media Lab in the VU main building. This is not an ideal location, however. Depending on scale and access requirements (e.g., the amount of physical interaction with the equipment), a better option in NU would be to put the Green Lab equipment either in the datacenter, or to make it part of the Embedded systems lab.

SNE Lab (education)

In addition, there will be a lab that is different in nature and purpose. This lab focuses on Systems Management, Networking and Security; it will be the successor of the current SNE lab at the UvA and is solely focused on education. It will be located elsewhere in the building (see "Housing" above).



The total space currently used by the VU labs is about 400 m2; the UvA labs use about 300 m2. This does not include "back office" and desk space, so the total space needed in the NU building will be roughly 800 m2. Because the Iconic Lab is located on the first floor, its size will be determined by the space available in that corner of the building. The exact sizes of the other labs, relative to their current sizes, may still change. Some labs, like the RoboLab, need a minimum amount of space and are more single-purpose than others. Flexibility is THE keyword for the Tech Labs: solutions in which spaces are extendable and dividable would be best. Interchanging the use of different lab spaces would increase the utilization and usability of the Tech Labs, so the setup of each specific space should be as generic as possible.

The current dimensions of a RoboCup Standard Platform League soccer field are 9.7 m by 6.7 m. In addition to the soccer field, space is needed for the chess robot (2x2 m), the @home and @work arenas (2x2 m) and robot evolution experiments (5x5 m).

A flying cage needs not only floor space but also height. A multi-purpose solution would be to combine the flying cage with a basketball court on top of the building, which has minimum dimensions of 28x15 m and a height of 8 m. Note that facilities for the weatherproof control of experiments in this outdoor cage are needed. [Comment: Unlikely that there will be space and/or permission to do so! Does UvA have an alternative location in mind for drone experiments?]


The Tech Labs (at least the RoboLab) should have ceiling mounts, with incorporated power and network outlets, so that lights, cameras and tracking devices can be mounted. The Tech Labs should also have a 40-inch touch-screen monitor for demonstrations.

The VU Tech Labs would essentially have a special status concerning IT. The preferred solution would be a setup like the current Game Cella' Lab: lab spaces equipped with plenty of power outlets and operational network ports. As the setup of a lab space can change within the hour, each lab space should have more active network ports than are ever in use at once, so that moving equipment around is not an issue.

Generic requirements:

  • Dual power outlets every 2 meters
  • Dual network ports every 2 meters
  • Standard climate control (cooling), with the option of extra cooling (a standard option in NU; figures are available)
  • Open space, preferably larger space with sound-proof movable dividers so the space can be divided into smaller spaces if needed
  • Separate Wi-Fi (see below)
  • Hardware and software will be managed by the Tech Labs; IT only offers network and purchasing support for most, if not all, equipment

Iconic Lab:

This lab has several purposes. First of all it will be a research and education lab, but because of its location and appearance it will also serve as a showcase location for the university, the computer science departments and the Network Institute. The lab will also work closely with the new Library & Learning Center of the UBVU (a preliminary collaboration document has been drafted between the Network Institute and the UBVU). Because of this, the Iconic Lab will have to present a flashy, modern, high-tech, high-end image. It should be a location where people invite visitors to present their research, where new technologies are shown and tested, and where people can access technology and be inspired to use it.

  • High-tech, high-end setup, decoration, etc
  • Glass walls to allow people to peek inside, preferably with the option of blinding the walls
  • A way to close/blind the outside windows completely
  • A technical back-office
Equipment storage room

To use the labs for demonstrations, enough storage room is needed to hide unused equipment from sight. Depending on the layout of the labs, each lab could need its own storage/back-office space.


The Tech Labs also feature a space with several workspaces. These are for support personnel (a permanent workspace) who need to be close to the labs (i.e., not in the office part of the building), and they offer high-end, open hardware and software for any users who do not have those facilities at their normal workplace.

Tech. support

At the moment the Tech Labs at the VU have full-time support, both for managing equipment and for actual participation in research projects (design, creation, analysis, publication). All hardware and software will be managed by the Tech Labs, apart from some setups that are better served by central IT support. Currently the UvA labs are supported by the Quickservice of the Technology Centre at Science Park; a comparable service should be available at the VU campus. Workbenches should be available at the labs for quick repairs of mechanics and electronics. For 3D printing devices, rooms with adequate air handling should be available.

Network Requirements

An important requirement for all the DoI lab facilities is the possibility to freely operate networked devices on both the wired and the wireless research networks, independent of the standard (more restrictive) networks managed by the IT department. If management of the standard VU/IT networks is outsourced at some point, this puts an extra requirement on the external party. The preferred situation is comparable to the current Game Cella' Lab at the VU, which uses a separate subnet placed in the DMZ of the VU network. This provides an open and unrestricted network for the labs while protecting the other managed networks of the university from anything originating from or passing through the labs.

The wireless research network (both 2.4 and 5 GHz) should also be part of the same Tech Labs subnet, although access to the regular IT-managed Wi-Fi network should still be possible. Specific channel assignment may be necessary to reduce interference. Interference with other wireless technologies (especially those in the 2.4 GHz range, like Zigbee and Bluetooth) will also need to be examined.

Due to these network restrictions, the connected hardware may not be able to access IT resources in the normal way; access will instead be limited to the facilities the university provides for working from home. User authentication, policies (if any), etc. will be handled by the Tech Labs.


The DoI lab facilities are fully managed by DoI. Effective usage of the Tech Labs (both for research and education) will be coordinated by the Network Institute. The SNE lab facilities can likely be shared with other DoI groups that have similar requirements, but this will require some coordination at the DoI level.


Labs at UvA and VU are currently mostly funded from a lump-sum budget. For particular projects that require very specific or expensive equipment, project money may be used. The current combined budget for the VU Tech Labs is €30K per year. Maintenance and replacement of the UvA RobotLab equipment is currently financed from the BSc KI and MSc AI programs (budget €16K/year).

Third-party subsidies for particular labs with outreach possibilities (e.g., external PR events in a stylized room, like the "Iconic Lab" mentioned above) could be an option. Designing labs with sufficient external appeal for PR events will cost significant money. For example, the current VU Intertain lab cost about €300K, since this also involved a professional design company (which shows!).

Currently the RobotLab is equipped with a motion-tracking system using Flex13 cameras of €1K each. The Prime-41 camera is suitable for outdoor use, but has a price tag of €6K. Larger tracking volumes require more cameras (8 cameras for 8x8 m, 16 cameras for 12x12 m, 32 cameras for 18x18 m).

Building Infrastructure

NU has been positioned as a “Smart and Green” building, showcasing many “Internet-of-Things” usage scenarios. A number of non-standard facilities should be part of the building infrastructure itself to accomplish this.


NU needs support for the following infrastructure all across the building:

  • Proximity sensors, like beacons, e.g., to support smart indoor localization facilities;
  • Environmental sensors like temperature, humidity, air quality;
  • (Public) room occupancy sensors;
  • Video cameras;
  • Various programmable actuators, responding to sensor state changes and application logic;
  • Access to the "Building Management System" (BMS); this controls the climate in the building based on settings and sensor data: a sort of glorified thermostat. With this data much more can be done, such as system monitoring and possibly optimization. The BMS must be "open" in terms of data and settings, so that researchers are able to read out the data and even change the settings.

An important overall requirement is that this equipment and the corresponding backend services (e.g., storing the sensor data and making it available for analysis) are to be considered a research infrastructure under the control of DoI. They should specifically not be part of a fixed, closed system managed by FCO. Rather, much of the data from the building management system should ideally be accessible (read-only) for research purposes, using an open API. Sharing opportunities may also exist with the Wi-Fi access points managed by IT: anonymized data about the Wi-Fi devices within range of specific access points can give a wealth of information about room occupation, traffic patterns, etc.
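As a sketch of how researchers might consume such an open, read-only BMS data feed: the actual BMS interface has yet to be specified, so the JSON record fields below ("room", "type", "value") and the room names are purely illustrative assumptions.

```python
import json
from collections import defaultdict

# Hypothetical JSON records, as an open (read-only) BMS API might expose them.
# All field names and values here are assumptions for illustration only.
sample_feed = json.dumps([
    {"room": "NU-3.12", "type": "temperature", "value": 21.4},
    {"room": "NU-3.12", "type": "temperature", "value": 21.8},
    {"room": "NU-4.01", "type": "temperature", "value": 19.9},
])

def average_by_room(feed_json, sensor_type):
    """Aggregate a BMS sensor feed into per-room averages for analysis."""
    readings = defaultdict(list)
    for record in json.loads(feed_json):
        if record["type"] == sensor_type:
            readings[record["room"]].append(record["value"])
    return {room: sum(vals) / len(vals) for room, vals in readings.items()}

averages = average_by_room(sample_feed, "temperature")
```

The point of the sketch is the data flow, not the protocol: as long as the BMS exposes timestamped, room-tagged readings in an open format, this kind of aggregation (and, later, trend analysis and prediction) can be built on top without any access to the BMS control plane.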

Some of the sensors will be wired (using Ethernet, preferably Power-over-Ethernet to simplify installation), and hence require wall ports available across the building, not just inside rooms. Having PoE-based outlets integrated in the ceilings is an attractive option, as The Edge (the Deloitte building at ZuidAs) has shown; The Edge incorporates both PoE-based LED lighting and sensor equipment this way. Other sensors will have a Wi-Fi interface, or employ another networking technology (e.g., Bluetooth Low Energy or LoRaWAN, to support low-powered sensors both in and around the building).

Given all the data that will be gathered about the NU building, it will be important to properly address the privacy concerns of its users. This will be done by consulting the joint UvA/VU ethics committees and by involving DoI security and privacy researchers in the projects related to (large-scale) sensor data.
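One concrete privacy measure for Wi-Fi-derived occupancy data could look as follows. This is a sketch under stated assumptions, not a decided design: device identifiers are replaced by salted one-way hashes before the data leaves IT, so that researchers can count distinct devices without ever handling raw MAC addresses. The salt value and pseudonym length are placeholders.

```python
import hashlib

# Placeholder salt: in a real deployment IT would keep this secret and
# rotate it periodically, so pseudonyms cannot be linked across periods.
SALT = b"rotate-me-periodically"

def pseudonymize(mac: str) -> str:
    """Replace a MAC address with a truncated, salted one-way hash."""
    return hashlib.sha256(SALT + mac.encode()).hexdigest()[:16]

def occupancy(seen_macs):
    """Count distinct devices without retaining raw identifiers."""
    return len({pseudonymize(m) for m in seen_macs})

# Two sightings of one device plus one other device -> occupancy of 2.
macs = ["aa:bb:cc:00:11:22", "aa:bb:cc:00:11:22", "de:ad:be:ef:00:01"]
count = occupancy(macs)
```

Whether truncated salted hashing is sufficient anonymization for this data is exactly the kind of question the ethics committees and DoI privacy researchers would need to assess.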


FCO will manage the sensors that are part of the base Building Management System. DoI will manage additional sensor equipment and integrate information from multiple sources (FCO/BMS, DoI, external) for various novel “smart building” and “smart city” type applications. DoI will also manage storage services that are required for, e.g., historical trend analysis and prediction.


We should expect recurring yearly costs for updating the sensor infrastructure of the building, to keep it technologically state-of-the-art. Sensors themselves are typically quite cheap (between €10 and €100 each), but the scale at which they will be applied means a substantial budget will still be required. The manpower needed to install, maintain and operate a large-scale setup should also not be underestimated. Preferably, the building should be equipped with a mounting system that allows sensors (1) to be placed at fine-grained locations throughout the building, (2) to be interfaced to the network infrastructure, and (3) to be powered.

Action items

Several of the subjects discussed above concern ICT functionality that is already present in the current UvA/VU contexts and that will be migrated to the NU building as part of the physical relocation process. However, some topics will require special action from some of the parties involved; others may provide requirements that need to be taken into account when constructing the NU building itself. These action items are listed in this section.

Subject | Research/Education | Handled by | Priority | Timing | Cost
Workplace ICT support for users relocating from Science Park | Research & Education | UvA/VU IT; DoI (requirements); FCO "NU ZitWerk & Onderzoek" | High | H2 2016-H1 2017 |
Requirements ICT support for Education | Education | DoI | High | Q3-Q4 2016 |
DWDM links UvA/VU | Research: SNE; also general UvA/VU IT usage | UvA/VU IT; feedback from SNE; IT4NU steering group | High | Q3-Q4 2016? | 300K + 100K yearly?
Wired & wireless network flexibility | Research: many groups; Education: student access to the electronic learning environment(s) | UvA/VU IT; FCO (sufficient outlets); DoI (equipment; management of research networks) | High | |
NU Datacenter and SNE server room design | Research: CompSys, SNE, others | FCO, DoI, VU/IT; IT4NU steering group | High | Q3-Q4 2016 | TBD
High performance shared storage UvA/VU | Research: many groups | UvA/VU IT; DoI | Medium | Q4 2017? | hardware cost is limited (10K) if current storage facility can be reused
Smart building design, sensor integration | Research: many groups | DoI; FCO (to provide flexibility for the integration) | High | Initial design Q3 2016? | 200K to make NU smarter than planned?; 25K yearly for innovation, new sensors
TechLabs design for NU | Research: robotics, multimedia, others; Education (Human Ambience, Pervasive Computing) | DoI (Network Institute); FCO for part of the implementation | High | Q3 2016? | initial design costs TBD; then 60K yearly for equipment updates?
