- 1 NU: ICT support for a Smart and Green building
- 1.1 Organization
- 1.2 The true difference
- 1.3 ICT at the workplace
- 1.4 Building and Department facilities
- 1.5 Educational Facilities
- 1.6 ICT in lecture rooms and meeting rooms
- 1.7 Network
- 1.8 Storage
- 1.9 Computing
- 1.10 Housing
- 1.11 Lab facilities
- 1.11.1 Iconic Lab
- 1.11.2 Embedded systems lab
- 1.11.3 Hacklab/Tinkerspace
- 1.11.4 Green Lab
- 1.11.5 SNE Lab (education)
- 1.11.6 Requirements
- 1.11.7 Organization
- 1.11.8 Costs
- 1.12 Smart Building Infrastructure
- 1.13 Action items
- 1.14 Conclusions
- 1.15 Getting started
NU: ICT support for a Smart and Green building
|0.5a||June, 2016||Wiki feedback UvA/VU (Giuseppe Procaccianti, Wilmar de Lange, Kees Verstoep); in progress|
|0.4b||May 30, 2016||Wiki feedback UvA/VU (Herbert Bos, Marco Otte, Natalia Silvis, Arnoud Visser, Robert Belleman, Kees Verstoep); snapshot for FCO|
|0.3||April 26, 2016||Kees Verstoep, Robert Belleman; incorporated comments from the 1st project meeting|
|0.2||April 18, 2016||Kees Verstoep, Robert Belleman; incorporated comments from Boy Menist; incorporated info on “The Edge”|
|0.1||April 8, 2016||Kees Verstoep, Robert Belleman|
THIS DOCUMENT IS VERY MUCH WRITTEN FROM A MINDSET THAT ADI WILL BE THE ONLY INHABITANT OF THE NU BUILDING. THIS WILL NOT BE THE CASE. WHAT ARE THE IMPLICATIONS OF DESIGN DECISIONS FOR OTHER DEPARTMENTS IN THE BUILDING? (ROB)
This document describes the requirements for the ICT infrastructure of the future new home of the Informatics departments of VU and UvA: the “New University” (NU) building at the Amsterdam ZuidAs. This building will be an important factor in realizing our research ambitions, providing an excellent environment to participate successfully in often externally funded research programs.
Past experience with the UvA/VU Beta collaboration in the O|2 (VU) building showed that substantial investments were required to make basic facilities (like building access, networking and printing) available to users from both organizations. For the O|2 building, the ambitions with respect to ICT functionality were rather limited: to provide a UvA employee with access to the same facilities previously available at Science Park. The NU building will house both Informatics departments of UvA and VU (i.e., the combined Amsterdam Department of Informatics, ADI); the requirements with respect to ICT support are therefore significantly higher.
Truly benefiting from co-located Informatics departments collaborating on many projects, making use of all state-of-the-art facilities provided by the new (NU) environment, will still require much preparation work. This document integrates feedback from previous information gathering rounds at UvA and VU, and is meant as a repository of requirements that will be used for the initial design of the ICT-related facilities in NU.
This document lists a number of important ICT-related themes, for each of which the following structure is used:
- Requirements with respect to NU;
- Organization of the support;
- Costs.
With respect to “Organization”, it should be mentioned that UvA and VU differ somewhat in how ICT is organized. Both UvA and VU have a “Central IT” department that manages the main IT infrastructure and services (including, e.g., network, mail, storage and desktop support).
Separate from that, UvA/FNWI has a “FEIOG” group, which provides more specialized ICT support for the Beta domain. At VU, an “ITvO” group exists with similar ambitions, but for research in general; it is part of central VU/IT. It should be noted that FEIOG and ITvO are very small (about 6 FTE each) compared to Central IT at UvA and VU (over 200 FTE each).
At the department level, ICT support exists mostly in the form of “Scientific Programmers”, who also manage part of the Informatics-specific research equipment. At UvA, most scientific programmers are grouped into the "Support Group" (OG, Ondersteunersgroep). In the remainder of this document, all Department of Informatics ICT support personnel will be grouped under the term “ADI”, although some differences exist in how they are organizationally embedded at UvA and VU.
TODO: FURTHER ORGANIZATIONAL ASPECTS NEED TO BE ADDRESSED BUT ARE OUTSIDE THE SCOPE OF THIS DOCUMENT
The document ends with a section listing action items for follow-up projects that need to be completed in preparation of NU.
The true difference
The central theme of this document comes from the question:
How will the NU building truly stand out as an icon for IT research and education?
Such questions (“what will be the unique selling point?”, “how will the NU building get the appropriate image and media attention?”) require answers covering numerous aspects of the building. A first inventory of those aspects is:
- Visual presence
- Ecological presence
- Energy efficiency presence
- Multi functional presence
- IT technological presence
- IT data management presence
- IT services presence
- Virtual presence
- Augmented presence
- Collaborative presence
- Atmospheric presence
- Cultural presence
- Facility presence
- Sensorial presence
- Flexibility presence
- Cognitive presence
- Student satisfaction presence
- Researcher satisfaction presence
- Social media presence
- Sustainability presence
- ... To be continued/completed
The challenge for the VU (and the UvA) is to find the (sub)set of aspects that produces the most synergy and will have the highest impact on image building and media attention. The description of the building in these terms will be presented with the conclusions of this document.
TODO: BE SURE TO PAY ENOUGH ATTENTION TO THE CONCLUSIONS AT THE END OF THIS DOCUMENT. KEY ASPECT IS THAT NU SHOULD BE MORE THAN A ONE TIME STATEMENT LIKE "THE EDGE." NU SHOULD BE ABLE TO KEEP THE ATTENTION AT A HIGH LEVEL THROUGHOUT ITS LIFESPAN.
ICT at the workplace
At NU, the ICT support related to desktops and on-campus IT services will (most likely) be provided by VU/IT, due to their physical proximity. However, most support services at the ZuidAs campus are currently only available via a “VU-net” web portal, which is only accessible using a VUnet-id.
We need to make sure that UvA and VU employees can effectively work together in NU and will have the same level of support. This will mean that either:
- VU-net services will also be available for UvA employees/students via UvAnet-id’s, or
- UvA employees will get an additional VUnet-id (maybe as a temporary measure; UvA students already get a VUnet-id when needed).
The experiences with existing UvA/VU collaborations like ACTA, and with the new ones in O|2, will be evaluated to determine whether the solutions implemented there are also acceptable for ADI, or whether alternative solutions need to be implemented.
There should be access to central productivity software: “office” software (document, spreadsheet and presentation development), software development, software revision management, project management, bug tracking system, web hosting, database management, Wiki.
TODO: THIS LIST REQUIRES FURTHER ANALYSIS BECAUSE MORE THAN AN AVERAGE NUMBER OF THESE FACILITIES ARE MANAGED AND SUPPORTED BY THE RESEARCHERS THEMSELVES.
All facilities should be supported on multiple operating systems and should also be accessible to students and staff who bring their own device (BYOD). Operating system support should cover Windows, Linux, macOS, Android and iOS.
An important aspect of the day-to-day work experience will be how the offices are designed, in particular to what extent and in what form a “FLEX” concept will be applicable for some of the employees. This will also have an impact on workplace ICT support, e.g., the requirement for a versatile KVM (Keyboard, Video, Mouse) solution that works for Windows, Mac and Linux laptops.
We will need to check the experiences with the current solutions applied in the O|2 building. Another useful step would be to let some employees, already before the move to NU, integrate (say, one day per week) with a partner section at the other institute. Limitations due to insufficient cross-organizational ICT support should then quickly become visible, and can be tackled by formulating specific collaboration use cases that will need to be supported in NU.
TODO: NOT ALL CONSEQUENCES OF COLLABORATION AND COOPERATION BETWEEN VU AND UVA RESEARCHERS CAN BE FORESEEN NOW. THIS LEADS TO A REQUIREMENT FOR CONTINUOUS FURTHER DEVELOPMENT OF IT FACILITIES FOR VU AND UVA EMPLOYEES COMBINED.
Workplace support is done by Central IT
TODO: A PROPER HANDLING OF THESE COSTS SHOULD INCLUDE A PROPER APP STORE / SOFTWARE BASE THAT IS HIGHLY SYNCHRONIZED, BOTH IN CONTENT AND IN COSTS, BETWEEN VU AND UVA.
Costs to develop cross-organizational ICT support will likely be substantial, as experience with the VU O|2 building has shown. However, extended functionality for UvA/VU collaborations in O|2 is already part of an ongoing project at VU/IT for the coming years, and ADI at NU is likely to benefit from this.
Building and Department facilities
The combined Informatics department will be an important user of NU, but not the only one. Collaboration with other departments or faculties in the NU building is expected to be common. Each department should have access to facilities that are central to its daily operations: telephones, printer/scanner/copiers.
Access to the building should be possible outside regular office hours. To facilitate collaboration with researchers outside ADI, it should be easy to arrange (possibly permanent) access to the floors where ADI researchers are located. It should also be easy to arrange meeting rooms, both by on-line booking in advance and by ad hoc reservation.
There should be access to a sufficient number of printers, scanners and copiers, preferably integrated into the same device; assuming ADI is divided into groups of about 50 people, one multi-purpose device per group seems reasonable. All printers should be capable of full-colour printing (as is the default at the VU). Scanning should support multi-page, batch scanning to a digital format (e.g., PDF).
General facilities will be handled by VU/FCO (VU Campus Organization for Facilities) with support from Central IT for identity management.
The NU building should be available 24x7 to its inhabitants.
Educational Facilities
NU will be a multi-purpose building, supporting research, education and leisure. The educational facilities will replace a significant part of the W&N educational facilities. W&N currently offers lecture space consisting of 51 rooms with 2891 seats, and 18 computer rooms with 462 available computers, most of them Windows-based, 48 of them Linux-based.
Regarding these facilities, we encountered a significant difference between the UvA and VU approaches. The VU still works from the idea that, during classes, each student should be able to use a PC of the VU; the VU supplies the PCs, including the required software. The UvA no longer supplies PCs for students and requires each student to own a notebook ("Bring Your Own Device", or BYOD). The "computer rooms" of the UvA only provide power outlets and WiFi. The UvA supplies the students with the software they need; where required, the UvA has virtual desktops available for software that cannot be distributed to students (via SURFspot or via an app store).
TODO:EXPLORE THE POSSIBILITY FOR THE VU TO CHOOSE THE SAME STUDENT WORKPLACE APPROACH AS THE UVA. WHAT POLICY CHANGE WILL THIS REQUIRE AND WHAT NEEDS TO BE DONE TO GET THAT CHANGE DECIDED AND IMPLEMENTED.
Lecture space in W&N consists of 51 rooms with 2891 seats in total; a significant part of that capacity will be recreated in the NU building. Preferably, these educational facilities will be as flexible as possible in the arrangement of their furniture, and will supply state-of-the-art presentation and interaction devices such as interactive smart boards, video recording equipment and sufficient WiFi capacity.
The lecture and computer rooms are not the only educational facilities that NU will require. Specifying the educational IT requirements of the NU building requires an impression of the major educational developments. There we see the following developments:
- IT has taken a central role in higher education becoming the main channel for significant parts of the curriculum.
- IT supports the educational content, the educational methods and the social aspects of the courses involved
- Education is moving toward a more personal approach
- Education is becoming more and more (inter-)active
- IT is moving toward "bring your own devices" (BYOD) support
This requires the following facilities:
- Virtual desktop facilities: virtual PC's for standard applications, virtual servers for more computing intensive tasks
- Virtual collaboration facilities: portals, storage, video, communication, etc.
- Actual collaboration facilities: team spaces with presentation facilities (screens/smartboards) and communication facilities.
Especially for Informatics there will be additional requirements directly related to the specific content of the Informatics courses:
- virtual desktops fully controlled by the computer science department
- play ground area with servers, databases, configurable network equipment
- professional software environments for software development, embedded system development, malware analysis, disassemblers, etc.
These facilities should be obtainable from within the VU, from any form of cloud service, or from any other future source.
The IT department of the VU is in a transitional state. It has clearly met the limitations of the fully centralized support paradigm and is now realigning itself with the more realistic notion that a significant part of the VU IT support and facilities is of such a diverse and specialized nature that a 100% central support solution will never work. The years in which IT pursued the fully central approach have shown a significant increase in decentralized solutions that circumvent the IT department as much as possible; given the nature of the field, most of these decentralized solutions are, of course, found at the computer science department.
The NU building will require a new distribution of responsibilities, authorizations and competences. For the research facilities this can be channeled through the Network institute. For the educational facilities, however, neither the VU nor the UvA has an appropriate services channel/organisation available.
ICT in lecture rooms and meeting rooms
NU should support multimedia facilities in all lecture and meeting rooms: a monitor in small rooms, and in large rooms a beamer of sufficiently high resolution (full HD or better) and brightness. Connectivity should be supported for standard PCs and laptops. Large lecture rooms should support audio amplification for both lecturer and laptop. A support group should be readily available in case of problems with the multimedia facilities in lecture rooms.
At least one meeting room should be available with high-end teleconferencing facilities to support remote collaboration; Skype and Webex are free/cheaper solutions that will suffice in most cases. Large lecture rooms should be equipped with (operator-free) support to record MOOC course material. When large lecture rooms have multiple screens, it should be possible both to mirror content across the screens and to show different content on each screen.
One or two large “info screens” should be placed in the NU central hall that show which lectures/events are taking place in which lecture rooms, news flashes, announcements of upcoming events, etc.
More detailed requirements:
- Meeting rooms: monitor, 48 inch, full HD resolution at minimum (4K preferred, cost is no longer prohibitive); connectivity:
- wired: VGA-D15, HDMI, DisplayPort
- wireless: Airplay/DLNA/Miracast
- power: outlet at the center of the table
- Possibility of deploying "Smart Whiteboards" in few rooms (e.g. https://en.wikipedia.org/wiki/Smart_Board)
- Lecture rooms: beamer, full HD resolution at minimum, connectivity:
- wired: VGA-D15, HDMI, DisplayPort
- wireless: Airplay/DLNA/Miracast
- Audio amplification; wireless microphone for lecturer, laptop audio connection
- power: outlets at each row
- Teleconferencing equipment
- Recording equipment to record MOOC videos in a designated lecture room:
- Three-point lighting
- Backgrounds: greenscreen, neutral background
- Large info screens in central hall: monitor, full HD resolution at minimum. A video wall, contributing to the High Tech image of the building, would be preferable here
Lecture and meeting rooms are managed by VU/FCO. VU/FCO has links with VU/AVC (Audio/Visual Center) which can give special attention to higher than usual requirements (AVC also provided technical support for the VU Intertain Lab). If needed, some of the audio-visual infrastructure may have to be installed by a third party.
Support for these facilities needs to be centralized: one desk that solves all problems encountered, and does so sufficiently fast when the problems concern facilities used for education.
The UvA has an intercom system operational for these cases.
Monitor: 40 inch: ~1.5kE, 60 inch: ~3kE (excl. wireless)
Beamer: ranges between 1kE and 20kE, depending on size of the projection (excl. screen).
Teleconferencing equipment: ~5kE?
Video recording studio: ~10kE
Network
For ADI, networks are amongst the subjects of research and education. This means that researchers and students should not be hampered by the ICT facilities in the NU building. Lessons learned by the UvA Informatics Institute with the move to the FNWI building at Science Park, as well as the transition to a centralized ICTS department, have shown that this is not obvious and therefore warrants mentioning. NU should support various types of networks to accommodate current and future research projects. Besides the standard networks supported and managed by the central IT department, additional network facilities need to be allowed (including routing and switching infrastructure and physical ports in each room) that can be managed by Informatics research/support staff.
The VU campus network currently provides several zones that provide different functionality, which is linked to the status of the equipment attached to it:
- green: fully managed by VU/IT;
- red: managed by the user;
- purple: managed by the user, with no network filtering applied (for servers);
- “research network”: similar to purple, but VU/FEW only, and predating purple.
In NU, the “research network” concept should be revived, e.g., to support ScienceDMZ functionality (see https://en.wikipedia.org/wiki/Science_DMZ_Network_Architecture), IoT devices and applications, and other ADI research scenarios for which the standard “managed” VU/IT network would be too limiting.
More detailed requirements:
TODO: THIS PART STILL NEEDS A PROPER ESTIMATE OF THE TOTAL NETWORK CAPACITY IN NUMBER OF CONNECTIONS, NUMBER OF USERS, AMOUNT OF DATA TO TRANSFER, ETC. THIS ANALYSIS REQUIRES THE KEY FIGURES OF THE USE OF THE BUILDING WHICH ARE STILL BEING NEGOTIATED.
- High bandwidth access to the Internet, of at least 10 Gbit/s;
- The wired network should be at least CAT6-based to allow for bandwidths above 1 Gb/s;
- Support for Power-over-Ethernet (PoE);
- 802.1x support (Identity based network access control) on the regular networks
- regular ports are typically connected to a standard UvA or VU VLAN, and protected from the outside world.
Wired Research Network
Besides access to the regular IT-managed network, ADI has special requirements on the wired network, both as a dedicated research field and as an enabling technology:
- Lightpaths to remote resources (at Science Park and elsewhere) should be possible in all server rooms; in particular DWDM fiber connectivity for SNE research;
- Possibility to install and manage network equipment on dedicated research networks;
- Possibility to use software defined networking (SDN);
- dedicated network cables to a selection of rooms and/or access to the cable trunks so the researchers can do this themselves.
- Every room should have sufficient additional ethernet ports for ADI research usage (e.g., one additional port per workplace by default);
- In most cases it will suffice to configure a specific research VLAN on this port, and a regular IT access switch can be used to enable it;
- having an on-line tool allowing ADI personnel to change the VLAN assignment on the research ports is highly desirable;
- the IT and research ports should be labelled or coloured differently to clearly tell them apart;
- It should be possible to add an additional switch or router to a research wall outlet for specific experimental usage, or because of insufficient research ports (e.g., to support many Internet-of-Things devices);
- In some special cases (e.g., to support SDN scenarios), the research port may physically need to be directly connected to a research switching infrastructure, i.e. this may require additional cabling;
- this infrastructure will be managed by ADI support personnel;
- the research access switch may then need to be placed in a location separate from the regular SER hosting the IT-managed access switches.
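The on-line VLAN assignment tool mentioned above could, in its simplest form, look like the sketch below. All names, port labels and VLAN IDs are hypothetical; a real tool would push the change to the access switch via its management interface rather than keep an in-memory table.

```python
# Hypothetical sketch of a self-service VLAN assignment tool for the
# research wall ports. Only research VLANs may be assigned, mirroring
# the requirement that ADI personnel manage research ports themselves.

RESEARCH_VLANS = {100, 101, 102}  # hypothetical research VLAN IDs


class ResearchPortManager:
    """In-memory stand-in for the switch management backend."""

    def __init__(self):
        self.assignments = {}  # port label -> VLAN ID

    def assign(self, port, vlan):
        """Assign a research port to a VLAN; reject non-research VLANs."""
        if vlan not in RESEARCH_VLANS:
            raise ValueError(f"VLAN {vlan} is not a research VLAN")
        self.assignments[port] = vlan

    def vlan_of(self, port):
        """Return the VLAN currently assigned to a port, or None."""
        return self.assignments.get(port)
```

A production version would additionally need authentication against ADI credentials and an audit log; both are omitted here.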
TODO: MAKE CONCRETE AGREEMENTS WITH BERT VOORBRAAK AND BOB VAN GRAFT ON THE DELIVERY OF THESE FACILITIES. ALSO INCLUDE A PROOF OF CONCEPT / PILOT IN THE IMPLEMENTATION.
- The standard (IT-supported) WiFi network should also easily be available for ad hoc guest usage, not requiring administrative preparation in advance.
Wireless Research Networks
ADI also requires a WiFi network to be available for research purposes, e.g., to allow a high density of devices in some areas, to support robots, drones and Internet-of-Things devices, and to provide direct access to monitoring (e.g., for indoor location assistance). Basically, two solutions exist:
- Some special "ADI-research" SSIDs need to be made available on the standard WiFi infrastructure in NU on which devices can be registered (ad hoc) on a research VLAN.
- virtually no additional cost, except for the development of the device registration tool
- no additional interference issues
- management access (e.g., for monitoring) will not be possible, unless special arrangements are made for ADI to get (read-only) access to the WiFi management layer
- The alternative would be setting up a physically separate WiFi network for research purposes
- Full access to all management facilities of the research WiFi
- To avoid interference between the general WiFi and the research WiFi networks, some policies regarding (e.g.) channel usage may need to be agreed upon (with enough spacing between the channels used, i.e., 1, 6 and 11 in the 2.4 GHz band), potentially lowering performance, both at 2.4 GHz and 5 GHz
- It is expensive to deploy a physically separate research WiFi network all through the building (doubling cost);
- A small scale research WiFi restricts benefits of full control to physically smaller areas, limiting its potential purpose.
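The channel policy mentioned above (channels 1, 6 and 11) follows directly from the 2.4 GHz channel layout: channel centers are 5 MHz apart, while each channel occupies roughly 22 MHz (the classic DSSS mask). A small sketch, with function names of our own choosing, makes the arithmetic explicit:

```python
# Why 2.4 GHz coexistence policies name channels 1, 6 and 11:
# channel centers are 5 MHz apart, but each channel is about 22 MHz
# wide, so non-overlapping channels must be 5 channel numbers apart.

def center_mhz(channel):
    """Center frequency of a 2.4 GHz WiFi channel (channels 1-13)."""
    return 2407 + 5 * channel  # channel 1 -> 2412 MHz

def channels_overlap(a, b, width_mhz=22):
    """True if the two channels' ~22 MHz-wide bands overlap."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

def non_overlapping_plan(channels=range(1, 12)):
    """Greedily pick a set of mutually non-overlapping channels."""
    plan = []
    for ch in channels:
        if not any(channels_overlap(ch, chosen) for chosen in plan):
            plan.append(ch)
    return plan
```

`non_overlapping_plan()` yields exactly channels 1, 6 and 11, which shows why a shared-spectrum agreement between the general and research WiFi leaves so little channel budget at 2.4 GHz.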
Apart from WiFi, there are several other new wireless network technologies that ADI may want to experiment with:
- LoRaWAN for IoT is a good example; there currently is an experimental setup in the VU W&N building;
- LiFi, communication via light in every room, is another good example.
TODO: THIS HIGH LEVEL OVERVIEW PICTURE REQUIRES MUCH FURTHER SPECIFICATION IN TERMS OF FACILITIES, SERVICES, OWNERSHIP, AUTHORISATION, ETC. THE TECHNICAL REALIZATION OF THIS ARCHITECTURE SHOULD CAREFULLY BE ANALYZED. OBJECTIVE IS TO DELIVER A MAXIMUM OF FLEXIBILITY AND CONTROL FOR THE RESEARCHERS WITHOUT SPENDING TOO MUCH MONEY ON DUPLICATED FACILITIES LIKE SPECIFIC ROUTERS FOR THE DIFFERENT NETWORK SEGMENTS.
The standard wired/wireless networks will be managed by Central IT. The research-specific networks will be managed by ADI.
The SNE research group at the UvA Informatics Institute currently has access to a high-speed DWDM fiber network to support their research. This has been made possible largely because of the proximity to SURFsara and AMS-IX at Science Park. A DWDM solution between UvA and VU data centers is currently being investigated. As this would significantly benefit the UvA/VU IT support departments themselves, it may be expected they can cover most of the base costs, while the additional equipment to make this suitable for networking research could be funded by ADI.
The additional wall outlets should not be very expensive, as this is part of a new installation; adding ports at a later time would typically be much more expensive.
Guest usage of WiFi is already being rolled out in the new VU O|2 building, so no additional development costs are to be expected here.
The costs for an independent “research” WiFi installation would depend on the scale at which this would be deployed: at a few specific places only (e.g., in the labs, see below), or in the entire NU building.
Storage
UvA and VU provide various data storage options:
- Central IT storage with limited capacity per user (typically tens of gigabytes maximum), which is tied to the regular UvAnet-id/VUnet-id account;
- SURFdrive provides data sharing across university boundaries, but given the capacity (typically gigabytes per user) and protocols supported, it is not suitable as a high performance data storage and sharing facility;
- Large-scale storage for specific projects (e.g., VU offers “SciStor”, a 200 TB storage facility accessible/mountable via VUnet-id credentials; UvA provides a Faculty Storage Service accessible by UvAnetID); this provides reliable storage at a price point close enough to bare disk hardware costs that there is no reason for researchers to buy and manage their own small RAID boxes. This storage should be accessible by all participants of a certain project or program.
- HPC compute resources (see below) usually have their own large scale storage solution integrated, however, this is typically just for internal use on the system itself.
- software repository functionality and other up-to-date software development support tools (like the Atlassian suite).
- Several research groups at UvA have installed their own servers equipped with storage.
With UvA and VU researchers collaborating intensively inside NU, it is clear that the current storage and data sharing options will be insufficient.
For example, what is still missing is a high-performance, high-capacity data storage facility that supports projects involving both VU and UvA members (SURFdrive is neither high capacity nor high performance). A pilot project to provide such a system is currently being started.
In addition, at UvA/FNWI a facility is being constructed within the project “Research Data Management” (RDM). A decision by the board of UvA in December 2014 requires that all UvA/HvA research units manage research data according to a “Research Data Management Plan”. Specification of proper RDM guidelines is also on the agenda at VU, and under discussion in the Beta VU IT committee. Given the ADI collaborations in NU, upcoming UvA/VU RDM guidelines should be properly aligned. For an overview of the UvA RDM policy and guidelines, see http://rdm.uva.nl/.
Central IT provides support for regular storage. Support for high performance, high capacity storage will most likely be provided by Beta IT support (UvA/FEIOG and VU/ITvO).
Small-scale storage is charged at a fixed rate per employee, with additional costs when the default quota is insufficient. However, the costs per gigabyte are relatively high (VU/IT charges EUR 4.20 per additional gigabyte per year). Large-scale storage is much cheaper on average (for comparison, SciStor is charged at EUR 0.25 per gigabyte per year); it is typically charged per project, based on the storage capacity reserved for the project.
The Faculty Storage Service at UvA costs EUR 110 per Terabyte per year without replicas, EUR 125 per Terabyte per year with replicas (if there is sufficient demand), up to a maximum of 100TB (beyond that, specific solutions will be designed). Requests up to 10TB can be honoured quickly. Costs for a combined UvA/VU storage facility to be developed should be similar.
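As a back-of-envelope illustration of the tariff differences quoted above, the sketch below compares the yearly costs of the three options for a given capacity. The function name is ours, the tariffs are the figures from this section, and 1 TB is taken as 1000 GB.

```python
# Yearly storage cost (EUR) per option, using the tariffs quoted in
# this section: EUR 4.20/GB/yr (central per-user storage),
# EUR 0.25/GB/yr (SciStor), EUR 110/TB/yr (UvA Faculty Storage
# Service, without replicas). Assumes 1 TB = 1000 GB.

def yearly_storage_cost_eur(capacity_tb):
    capacity_gb = capacity_tb * 1000
    return {
        "central_per_user": capacity_gb * 4.20,
        "scistor": capacity_gb * 0.25,
        "faculty_service": capacity_tb * 110,
    }
```

For a 10 TB project this gives roughly EUR 42,000, EUR 2,500 and EUR 1,100 per year respectively, which illustrates why per-user central storage is unsuitable for large research datasets.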
Computing
Apart from processing capacity provided by regular desktops and laptops, many Informatics researchers depend on various other computing resources, e.g., for the important Big Data and High Performance Computing (HPC) domains. Some of these resources are housed locally and managed by Informatics support staff, and some are housed and managed externally.
Locally managed resources are:
- High-end workstations, sometimes equipped with GPUs (some specifically for computing instead of visualization, some specifically for visualization); typically managed by the researchers themselves. The building must be able to accommodate this kind of workstation, which requires a higher-than-average cooling capacity: a workstation typically draws about 1 kW, almost all of which is converted to heat.
- DAS-4 and DAS-5, which are both at UvA and VU, but are also part of a larger national distributed system for Informatics research; DAS is not a general-purpose "production" HPC service, however (it also serves a secondary role for teaching purposes);
- General-purpose compute servers managed by central IT, FEIOG (UvA) or ITvO (VU), or the research groups themselves.
- SNE: 2 racks (including one optics rack)
- ILPS: 2 racks
- CSL+ISLA+CSA: 0.5 racks
- BioInformatics: 1 rack
- Minix/Security group: 1 rack
- Business, Web and Media + Knowledge Representation & Reasoning: 1 rack
- Software and Services: 1 rack, located in the main building (Green Lab)
- General-purpose VM facility, e.g., SciCloud as currently managed by ITvO (VU)
- Internal Github
Externally managed resources are notably the ones located at SURFsara:
- Cartesius (national supercomputer);
- LISA (national cluster);
- Hathi Hadoop cluster.
Room ventilation should be sufficient to deal with the heat from high performance workstations; a truly “FLEX” office concept (assuming low powered laptops with docking station) is incompatible with machines like this, so maybe these would need to be located in a special room type with additional cooling.
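To make the cooling requirement concrete, the sketch below estimates the heat load of a room with such workstations, using the ~1 kW per workstation figure mentioned earlier. The 1 kW ≈ 3412 BTU/h conversion is the standard unit for sizing air conditioning; this is an illustrative estimate, not a building-engineering calculation.

```python
# Rough heat-load estimate for a room with high-end workstations.
# Assumes each workstation draws ~1 kW, nearly all of it converted
# to heat (as noted in this section). 1 kW is about 3412 BTU/h.

BTU_PER_HOUR_PER_KW = 3412

def room_heat_load(n_workstations, kw_per_workstation=1.0):
    """Return (heat in kW, required cooling in BTU/h)."""
    kw = n_workstations * kw_per_workstation
    return kw, kw * BTU_PER_HOUR_PER_KW
```

A shared office with ten such machines thus needs on the order of 10 kW (~34,000 BTU/h) of extra cooling, far beyond what a standard FLEX office is dimensioned for.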
Access procedures should remain the same. E.g., right now, persons from the KRR and W&M groups who are in charge of server management can directly access the server room, or are at most 1 step away from someone who can provide access. This simple and fast means of physical server access should be possible in the new building as well, to allow hardware or network issues to be quickly and easily resolved by members of the group.
ADI does the support for its own compute infrastructure mostly itself (e.g., DAS and other Informatics-specific compute servers). FEIOG (UvA) and ITvO (VU) manage additional HPC and Cloud resources, typically for broader domains than just Informatics. Most of the external resources are managed by SURFsara.
With UvA and VU Informatics joining forces, it will become attractive to provide a certain amount of general-purpose HPC capacity on-site in NU, as this also allows efficient co-location of data storage and processing. This would also make UvA and VU less vulnerable to changes in SURFsara pricing and system support policies. The migration to the NU datacenter is also a good opportunity to consolidate ad hoc compute services that were purchased for specific projects over the years; this would also reduce the overall system management effort required. A solution where a base HPC and Cloud infrastructure is available in-house, with support to scale out to external resources where needed, seems most attractive. It will then need to be considered carefully whether this is best arranged by ADI itself (as is currently the case, only with fragmented systems and support) or at a more central level.
High end workstations that exceed standard desktop specifications are typically purchased from project- or research group budget.
The DAS hardware investment is funded by NWO, with a 25% matching by participating institutes, who also take care of housing and support costs.
Computing capacity at SURFsara is typically provided on the basis of a specific project proposal, but in the past both VU and UvA have in addition (at a central university level) invested a significant yearly sum in the support and extension of the LISA cluster; this is expected to continue (UvA: 450 kEur/yr, VU: 650 kEur/yr), at least for the next few years. Policies to properly divide the compute capacity over the various departments (Informatics is certainly not the biggest consumer at the moment) are being discussed at the HPC councils of both UvA and VU.
Housing facilities for IT equipment will be offered via several dedicated server rooms and labs, but also at a smaller scale, distributed over the building.
Multiple categories of equipment and access patterns exist:
1. servers that rarely need physical access (only by system administrators); this includes housing for HPC and storage;
2. equipment occasionally accessed physically by researchers; this includes hosting of dedicated servers or custom hardware and networking equipment for research;
3. servers from non-Informatics research institutes or from Beta IT support (e.g., housing of large-scale storage and VM hosts);
4. servers accessed by students, e.g., the server room provided to students in the Master SNE;
5. smaller equipment that often requires physical interaction, proximity or a special environment (e.g., robotics, visualization, virtual reality).
Categories 1 and 2 will be supported by the data center located on the top (12th) floor of the NU building, which will have a capacity of at least 24 racks. In a previous phase, the UvA and VU research server rooms were inventoried, and an overview was made of the rack space to be provided for current and future Informatics research purposes. The NU data center design was dimensioned based on this inventory.
|Category||Description||Current location||Racks||Load per rack (kVA)||Total load (kVA)|
|VU research||DAS-4/DAS-5||W&N S-411||7||16||112|
|VU research||Informatics Misc||W&N S-411||3||13||39|
|VU research||BioInformatics||W&N S-411||0.5||13||6.5|
|VU research||VM/Storage||W&N S-405||2||13||26|
|VU research||Security/Minix||W&N S-405||0.5||13||6.5|
|VU research||Network||W&N S-411||1||6||6|
|UvA research||IvI Misc||SP904 D3.130||5||13||65|
|UvA research||Vis/SNE lab||SP904 D3.132||2||13||26|
|UvA research||scratch space|| ||1||13||13|
TODO: VERIFY THIS DATA WITH THE OWNERS AND SIMILAR DATA ELSEWHERE IN THIS DOCUMENT.
Some of the equipment in categories 1 and 2 that is currently used at Science Park is now foreseen to stay there (e.g., parts of DAS/UvA and equipment for which large scale DWDM connectivity to Science Park and/or AMS-IX/Netherlight is essential). This may leave room for a number of racks in Category 3. Details still need to be checked out, however. The reliability of the power and cooling of the NU data center is important; it will only be affordable to buy UPSes for a small part of the research equipment in there.
Category 4 includes the equipment of the SNE master; this will be housed in a smaller server room in the middle of the building (probably level 7), with a capacity of about 10 racks. The power and cooling requirements will be significantly below those of the data center.
The remaining equipment in Category 5 typically requires little space and energy, and thus no active room cooling. It can be located in one of the TechLabs (see below), can be part of the overall building research infrastructure (e.g., some sensor or beacon equipment may fall in this category), or may be used directly at the workplace of the researcher.
To support “green ICT” research, all NU system housing locations should be equipped with energy measuring equipment that can be monitored by the researchers.
ADI is the primary user of the NU datacenter, and a limited number of ADI employees should have full access and means to setup and manage the equipment there (in some other universities, departments are sometimes seen as a “customer” which is only provided limited access, on a per-request basis). Access to the SNE research server room should be fully controlled by ADI/SNE research staff. The lab for the SNE educational master should be accessible by students enlisted for this master as well as the teaching staff.
There are two main options to deal with the equipment housing costs:
- pay them from specific research/group budgets, as in current UvA server rooms and labs;
- pay them via a lump sum from the Informatics departments' budget; but this probably needs an additional feedback mechanism to ensure efficient use of the available resources. VU server rooms, including the ones for research equipment, are currently paid via a university-wide lump sum.
For NU, a middle ground between these two cost options should perhaps be found, where most of the housing costs are covered by a lump sum and a smaller amount is charged for actual use, to promote cost awareness. Otherwise researchers might avoid putting servers in the appropriate place (i.e., the data center or other server rooms) and keep them at their workplace instead, simply to work around excessive system housing costs draining their research budgets.
UvA and VU (more precisely the Network Institute) currently have a number of labs related to robotics, multimedia, visualization, gaming and networks, which will return in updated form in NU. These labs are used for research (by staff members, students and others) and for education.
TODO: AN EVALUATION OF THE CURRENT NETWORK INSTITUTE FACILITIES OR AN UPDATE OF THE NETWORK INSTITUTE VISION AND AMBITION IS REQUIRED
We assume that the current lab space is the minimum needed in the NU building (roughly 800 m2). The current labs are:
|Intertain Lab||VU W&N S-111||120m2 lab, 35m2 back-office||Ambient living research, education, presentations, PR|
|Game Cella' Lab||VU Metropolitan building||80m2 lab, 35m2 work places, 10m2 storage||Mostly research with focus on virtual/augmented reality; also some education, presentations|
|MediaLab||VU Main building||65m2 cubicles, 40m2 large room||Mostly social sciences research, questionnaires using cubicles; will probably stay in current place. Currently hosts the Green Lab as well.|
|VU RobotLab||VU W&N P-437||40m2||Robotics research|
|UX/Gaming Lab||UvA Science Park||80m2||Mostly ambient living based research|
|RoboLab||UvA Science Park||100m2+||Robotics research|
|Visualization Lab||UvA Science Park||40m2 (shared with SNE Research Lab)||Visualization research|
|SNE Research Lab||UvA Science Park||40m2 (shared with Visualization research Lab)||Networks research|
|IAS Lab||UvA Science Park||50m2||Electronics, tinker/makerlab|
Details are still being worked out; this will also depend on an evaluation of the current and planned lab usage. A big advantage of the NU building is that most labs (called “TechLabs” at VU) can be co-located on a single floor, thus providing ways to share facilities, and scale the labs based on actual usage. Currently the new Tech Labs are planned at two locations in the NU building. A new Intertain Lab, called the Iconic Lab, will be located on the first floor in the north-east corner of the building, right next to the new Library & Learning Center (LLC) of the UBVU. The other labs are now planned on the 6th floor. Exact configuration and location, except for the Iconic Lab, will have to be determined in the coming months.
Some of the facilities that will be available in the NU TechLabs are:
- various sensor devices, beacons and actuators;
- human-computer interaction devices, e.g., virtual reality, eye trackers;
- presentation / visualization devices;
- pervasive computing devices, used for education;
- high-end gaming devices;
- various robots;
- drones (with safety measures to avoid accidents, e.g. a “flying cage”, discussed below).
The Iconic Lab has several purposes. First of all it will be a research and education lab, but because of its location and appearance it will also serve as a showcase location for the university, the computer science departments and the Network Institute. This lab will also work closely with the new Library & Learning Center of the UBVU (a preliminary collaboration document has been drafted between the Network Institute and the UBVU). Because of this, the Iconic Lab will have to project a flashy, modern, high-tech, high-end image. It should be a location where people invite visitors to present their research, where new technologies are shown and tested, and where people can access technology and be inspired to use it.
- High-tech, high-end setup, decoration, etc.
- Glass walls to allow people to peek inside, preferably with the option to blind them
- A way to close/blind the outside windows completely
- A technical back-office
Embedded systems lab
What is currently lacking is lab space suitable for embedded systems development, used for courses like Pervasive Computing and System Testing (which is based on embedded systems with microcontrollers). As a workaround, these courses are currently taught at VU in an underutilized physics lab (WN-U130), but for NU a permanent solution is necessary. Making it part of the TechLabs seems like a very good opportunity. Requirements are 16 large tables, each with a self-managed PC (e.g., for the latest Matlab and other development software), a beamer, a whiteboard, and lockable storage facilities for various types of support equipment such as oscilloscopes, low-voltage sources and voltmeters. Support for setting up the equipment for lab experiments would be very useful. A lab like this currently exists at UvA: C3.161 (~50 m^2, managed by Edwin Steffens).
There is also demand for a hacklab / tinkerspace where students can work on projects or play CTFs (computer security "Capture The Flag" games). This room should also be accessible in weekends and in the evening, like the current RobotLab at the UvA. It should serve at least 20 people, have storage facilities for special support equipment, and a beamer. Combining it with the Embedded systems lab could be attractive.
The VU "Green Lab" is used for research and courses on software analytics, experimentation, and energy efficiency. It is also used for research experiments in collaboration with industrial partners and planned to host postgraduate education (PGE) courses. As a workaround, the Green Lab and its server rack are currently hosted in the Media Lab in the VU main building. For NU a permanent solution is necessary, both for lab room and server racks. The lab room should be accessible in weekends and in the evening, like the current RobotLab and the Hacklab (for Hackathons and PGE). It should serve at least 30 people, have storage facilities for special support equipment (e.g. mobile devices, power meters for experiments), and a beamer. The server rack(s) can be located in the NU datacenter, provided that the servers can be physically accessed by researchers. This solution requires full remote accessibility via a specific DMZ subnet (see Network requirements below) with a number of hosts proportional to the number of students and experiments (e.g. 4 VMs per student).
SNE Lab (education)
In addition there will be a lab that is different in nature and purpose. This lab has a focus on Systems Management, Networking and Security; this will be a successor of the current SNE lab at UvA and is solely focused on education. It will be located elsewhere in the building (see “Housing” above).
The total space currently used by the VU labs is about 400 m2; the UvA labs use about 300 m2. This does not include "back office" and desk space, so the total space needed in the NU building will be roughly 800 m2. Because of the location of the Iconic Lab on the first floor, the size of that lab will be determined by the space available in that corner of the building. The exact size of the other labs compared to their current sizes can still change. Some labs, like the RoboLab, need a minimum amount of space and are more single-purpose than other labs. Flexibility is THE keyword for the TechLabs, and solutions in which spaces are extendable and dividable would be best. Interchanging the use of different lab spaces would increase the use and usability of the TechLabs, so the setup of each specific space should be as generic as possible.
The current dimensions of a RoboCup Standard Platform League soccer field are 9.7m by 6.7m. Next to the soccer field, also space for the chess-robot (2x2m), the @home and @work arenas (2x2m) and a space evolution of robots (5x5m) is needed.
A flying cage needs not only floor space but also height. A multi-purpose solution would be to combine the flying cage with a basketball court on top of the building, which has minimum dimensions of 28x15 m with a height of 8 m. Note that facilities for weather-proof control of the experiments in this outdoor cage are needed. [Comment: Unlikely that there will be space and/or permission to do so! Does UvA have an alternative location in mind for drone experiments?]
The TechLabs (at least the RoboLab) should have ceiling mountings, with incorporated power and network outlets, to be able to mount lights, cameras and tracking devices. The TechLabs should also have a 40-inch touch-screen monitor for demonstrations.
Basically, the VU TechLabs would have a special status concerning IT. The preferred solution would be a setup like the current Game Cella' Lab. This would mean that lab spaces are fitted with plenty of power outlets and operational network ports. As the setup of a lab space can change within the hour, each lab space should have more active network ports than are ever really in use, so that moving equipment around is not an issue.
- Dual power outlets every 2 meters
- Dual network ports every 2 meters
- Standard climate control (cooling), with optional extra cooling (a standard option in NU; numbers are available)
- Open space, preferably larger space with sound-proof movable dividers so the space can be divided into smaller spaces if needed
- Separate Wi-Fi (see below)
- Hard- and software will be managed by the TechLabs; IT only offers network and purchasing support for most if not all equipment
Equipment storage room
To use the labs for demonstrations, enough storage room is needed to hide unused equipment from sight. Depending on the layout of the labs, each lab could need its own storage/back-office space.
The TechLabs also feature a space with several workspaces. These are for the support personnel (a permanent workspace) that needs to be close to the labs (i.e., not in the office part of the building), and they offer high-end hard- and software for users that do not have those facilities at their normal workplace.
At the moment the TechLabs at the VU have full-time support, both for managing equipment and for actual participation in research projects (design, create, analyze, publish). All hard- and software will be managed by the TechLabs, apart from some setups that are better served by central IT support. Currently the UvA labs are supported by the Quickservice of the Technology Centre at Science Park; at the VU campus a comparable service should be available. For quick repairs of mechanics and electronics, workbenches should be available at the labs. For 3D printing devices, rooms with adequate air handling should be available.
An important requirement for all the ADI lab facilities will be the possibility to freely operate the networked devices on both the wired and the wireless research networks, independent of the standard (more restrictive) networks managed by the IT department. If management of the standard VU/IT networks were to be outsourced at some point, this puts an extra requirement on the external party. The preferred situation is comparable to the current Game Cella' Lab at the VU, where a separate subnet is used that is placed in the DMZ of the VU network. This ensures an open and unrestricted network for the labs, and at the same time protects the other managed networks of the university from anything coming from or through the labs.
The wireless network for research (both 2.4 and 5 GHz) should also be part of the same subnet as the TechLabs wired network, although access to the regular IT-managed Wi-Fi network should still be possible. Specific channel assignment may be necessary to reduce interference. Interference with other wireless network technologies (especially those in the 2.4 GHz range, like Zigbee and Bluetooth) will also need to be examined.
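To illustrate the channel assignment concern, the sketch below checks a proposed 2.4 GHz channel plan for overlap: in that band, channels fewer than five channel numbers apart overlap, which is why 1, 6 and 11 are the usual non-overlapping set. This is a generic illustration, not part of any existing NU tooling.

```python
# Check a proposed 2.4 GHz Wi-Fi channel plan for overlapping channels.
# Adjacent 2.4 GHz channels are 5 MHz apart while an 802.11 channel is
# roughly 20 MHz wide, so channels fewer than 5 numbers apart overlap.

def overlapping_pairs(channels):
    """Return all pairs of assigned channels that interfere with each other."""
    pairs = []
    chans = sorted(set(channels))
    for i, a in enumerate(chans):
        for b in chans[i + 1:]:
            if b - a < 5:  # center frequencies closer than 25 MHz
                pairs.append((a, b))
    return pairs

# The classic non-overlapping plan for neighbouring access points:
print(overlapping_pairs([1, 6, 11]))      # no interference expected
# A naive plan that looks spread out but still overlaps:
print(overlapping_pairs([1, 4, 8, 11]))
```

A real deployment would of course also have to account for access points of neighbouring organizations and for non-Wi-Fi technologies sharing the band.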
The connected hardware may not be able to access IT resources in the normal way due to network restrictions. Access will be restricted to facilities provided by the university for working at home. User authentication, policies (if any), etc will be handled by the Tech Labs.
The ADI lab facilities are fully managed by ADI. Effective usage of the Techlabs (both for research and education) will be coordinated by the Network Institute. The SNE lab facilities can likely be shared with other ADI groups that have similar requirements, but this will require some coordination at the ADI level.
Labs at UvA and VU are currently mostly funded from a lump sum budget. For particular projects that require very specific or expensive equipment, project money may be used. The current budget for all VU TechLabs combined is 30 k€ yearly. The maintenance and replacement of the equipment of the UvA RoboLab is currently financed from the BScKI and MScAI (budget 16 k€/year).
Third party subsidy for particular labs with outreach possibilities (e.g., external PR events in a stylized room, like the "Iconic lab" mentioned above) could be an option. Designing labs that have sufficient external appeal for PR events will cost significant money. For example, the current VU Intertain lab cost about 300 kEur, since this also involved a professional design company (which shows!).
Currently the RoboLab is equipped with a motion tracking system with Flex13 cameras of 1 k€ each. The Prime-41 camera is ready for outdoor use, but has a price tag of 6 k€. For larger tracking volumes, more cameras are needed (8 cameras for 8x8 m, 16 cameras for 12x12 m, 32 cameras for 18x18 m).
Smart Building Infrastructure
NU has been positioned as a “Smart and Green” building, showcasing many “Internet-of-Things” usage scenarios. A number of non-standard facilities should be part of the building infrastructure itself to accomplish this.
TODO: THIS PART SHOULD BE EXTENDED WITH THE REQUIREMENTS FOR FLEXIBLE SENSOR FACILITIES, GIVEN THAT REGULAR UPDATES OF THE INSTALLED SENSORS WILL BE REQUIRED TO KEEP "NU" UP TO DATE WITH THE AVAILABLE SENSOR TECHNOLOGY. TRY TO GET INFORMED ABOUT THE BIBRO NETWORK IN THE VU AND THE LESSONS WE CAN LEARN FROM THAT NETWORK.
NU needs support for the following infrastructure all across the building:
- Proximity sensors, like beacons, e.g., to support smart indoor localization facilities and contextual applications;
- Environmental sensors like temperature, humidity, air quality (O2/CO2, dust), sound (power), light (lumen);
- (Public) room occupancy sensors;
- Video cameras;
- Various programmable actuators, responding to sensor state changes and application logic, e.g. lights on/off (colour?), temperature up/down, sun shields open/closed;
- Access to the "Building Management System" (BMS); this controls the climate in the building based on settings and sensor data, a sort of glorified thermostat. With this data much more can be done, such as system monitoring and possibly optimization. The BMS must be "open" in terms of data and settings, so that researchers are able to read out the data and even change the settings.
An important overall requirement is that this equipment and the corresponding backend services (e.g., storing the sensor data and making it available for analysis) are to be considered a research infrastructure which is under control of ADI. They should specifically not be just part of a fixed closed system that is managed by FCO. Rather, much of the data from the building management system should ideally be accessible (read-only) for research purposes, using an open API. Sharing opportunities may also exist with the WiFi access points managed by IT; having anonymized data about WiFi devices that are within range of all specific access points can give a wealth of information about room occupation, traffic patterns, etc.
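To make the "open API" requirement more concrete, the sketch below shows what read-only BMS data might look like to a researcher and how it could be filtered per room. The payload structure, room names, sensor names and units are assumptions for illustration; no such NU interface exists yet.

```python
import json

# Hypothetical read-only BMS payload as it might be returned by an open
# API; all field names and values are invented for this sketch.
SAMPLE_BMS_PAYLOAD = """
[
  {"room": "NU-7.12", "sensor": "temperature", "value": 21.4, "unit": "C"},
  {"room": "NU-7.12", "sensor": "co2",         "value": 780,  "unit": "ppm"},
  {"room": "NU-6.03", "sensor": "temperature", "value": 19.8, "unit": "C"}
]
"""

def readings_by_room(payload, room):
    """Return all sensor readings for one room from a BMS JSON payload."""
    return [r for r in json.loads(payload) if r["room"] == room]

for reading in readings_by_room(SAMPLE_BMS_PAYLOAD, "NU-7.12"):
    print(reading["sensor"], reading["value"], reading["unit"])
```

The essential point is that such data should be retrievable programmatically, without going through FCO on a per-request basis.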
Part of the sensors will be wired (using ethernet, preferably using Power-over-Ethernet to simplify installation), hence require wall-ports that are available across the building, not just inside rooms. Having PoE-based outlets integrated in the ceilings is an attractive option, as The Edge building (the Deloitte building at ZuidAs) has shown; The Edge incorporates both PoE-based LED lighting and sensor equipment this way. Other sensors will have a WiFi interface, or employ another networking technology (e.g., Bluetooth Low Energy, or LoRaWAN, to support low-powered sensors both in and around the building).
Work is currently underway to build a prototype setup based on the latest Raspberry Pi 3 in combination with a Grove board providing access to various useful sensors via a flexible sensor cable system. The Raspberry Pi 3 device itself integrates network access by means of Ethernet, Wi-Fi, and Bluetooth (including Bluetooth Low Energy, "BLE"). For a more detailed proposal, see this document (29 June 2016) by Rob Belleman and Kees Verstoep on
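A minimal sketch of how such a sensor node could report its readings to a central collector is shown below. The collector address, message format and the `read_sensor()` stub are assumptions for illustration only; the actual prototype uses the Grove board for sensor access and its design is described in the document mentioned above.

```python
import json
import socket
import time

# Sketch: a Raspberry Pi based sensor node reporting readings to a
# central collector over UDP. Address, message format and the sensor
# stub are hypothetical; they are NOT the actual prototype design.

COLLECTOR = ("127.0.0.1", 9999)  # placeholder collector address

def read_sensor():
    """Stub for a Grove sensor read; returns a fixed value here."""
    return 21.4

def make_message(node_id, sensor, value):
    """Package one reading as a JSON datagram payload."""
    return json.dumps({
        "node": node_id,
        "sensor": sensor,
        "value": value,
        "timestamp": time.time(),
    }).encode("utf-8")

def send_reading(sock, node_id):
    sock.sendto(make_message(node_id, "temperature", read_sensor()), COLLECTOR)

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        send_reading(sock, "nu-floor7-node01")
```

The backend services storing such messages for analysis are part of the research infrastructure requirement discussed above.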
Given all data that will be gathered about the NU building, it will be important to properly deal with privacy concerns of users of the NU building. This will be addressed by consultation of joint UvA/VU ethics committees and by involving ADI security and privacy researchers in the projects related to (large scale) sensor data.
FCO will manage the sensors that are part of the base Building Management System. ADI will manage additional sensor equipment and integrate information from multiple sources (FCO/BMS, ADI, external) for various novel “smart building” and “smart city” type applications. ADI will also manage storage services that are required for, e.g., historical trend analysis and prediction.
We should expect recurring yearly costs for updating the sensor infrastructure of the building, to keep it technologically state-of-the-art. Sensors are typically quite cheap (e.g., between EUR 50 and EUR 100 for an Arduino/Raspberry Pi base device providing network connectivity, and between EUR 5 and EUR 15 per hardware sensor attached to it), but the scale at which they will be applied means a substantial budget will still be required. The manpower to install, maintain and operate a large scale setup should also not be underestimated. Preferably, the building should be equipped with a mounting system that allows sensors (1) to be placed at fine-grained locations throughout the building, (2) to be interfaced to the network infrastructure and (3) to be powered.
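A rough worked example of the scale effect, using the unit prices above; the deployment numbers (500 nodes, 4 sensors each) are made up purely for illustration:

```python
# Rough sensor-budget estimate using the unit prices mentioned above
# (EUR 50-100 per base device, EUR 5-15 per attached hardware sensor).
# The deployment size is a made-up illustration, not a NU planning figure.

def sensor_budget(n_nodes, sensors_per_node, node_price, sensor_price):
    """Total hardware cost for a sensor deployment, excluding manpower."""
    return n_nodes * (node_price + sensors_per_node * sensor_price)

# e.g., 500 nodes across the building, 4 sensors each:
low  = sensor_budget(500, 4, 50, 5)    # optimistic unit prices
high = sensor_budget(500, 4, 100, 15)  # pessimistic unit prices
print(f"EUR {low} - EUR {high}")       # prints "EUR 35000 - EUR 80000"
```

Even with cheap sensors, a building-wide deployment thus quickly reaches tens of thousands of euros in hardware alone, before installation and operation costs.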
Several of the subjects discussed above concern ICT functionality that is already present in the current UvA/VU contexts and that will be migrated to the NU building as part of the physical relocation process. However, some topics will require special action from some of the parties involved; others may provide requirements that need to be taken into account when constructing the NU building itself. These action items are listed in this section.
|Action item||Scope||Responsible parties||Priority||Timeline||Cost estimate|
|Workplace ICT support for users relocating from Science Park||Research & Education||UvA/VU IT; ADI (requirements); FCO "NU ZitWerk & Onderzoek"||High||H2 2016-H1 2017|
|Requirements ICT support for Education||Education||ADI||High||Q3-Q4 2016|
|DWDM links UvA/VU||Research: SNE; also general UvA/VU IT usage||UvA/VU IT; feedback from SNE; IT4NU steering group||High||Q3-Q4 2016?||300K + 100K yearly?|
|Wired & wireless network flexibility||Research: many groups; Education: student access to the electronic learning environment(s)||UvA/VU IT; FCO (sufficient outlets); ADI (equipment; management of research networks)||High|
|NU Datacenter and SNE server room design||Research: CompSys, SNE, others||FCO, ADI, VU/IT; IT4NU steering group||High||Q3-Q4 2016||TBD|
|High performance shared storage UvA/VU||Research: many groups||UvA/VU IT; ADI||Medium||Q4 2017?||hardware cost is limited (10K) if current storage facility can be reused|
|Smart building design, sensor integration||Research: many groups||ADI; FCO (to provide flexibility for the integration)||High||Initial design Q3 2016?||350K for smart sensors + 150K integration costs?; 25K yearly for innovation, new sensors|
|TechLabs design for NU||Research: robotics, multimedia, others; Education (Human Ambience, Pervasive Computing)||ADI (Network Institute); FCO for part of the implementation||High||Q3 2016?||initial design costs TBD; then 60k yearly for equipment updates?|
The NU building should distinguish itself from the crowd in the following manner:
- The NU building will actually be one big "IT Lab", continuously gathering data about all aspects of the use of the building by its inhabitants, and making that data available to those inhabitants. This means that the NU building will have a "living" ecology of sensors that is continuously updated to follow the state of the art in sensor technology on the one hand, and that facilitates control over (aspects of) the building for its inhabitants on the other. The objective is to include as much data as possible in this IT facility of the NU building.
- The NU building will also have the characteristics of "one big server room". Developments in IT hardware, like hyperconverged architectures and room-corner mini-supercomputers as a counterbalance to cloud and virtualization developments, indicate that the building will need to be very flexible concerning the location and support of computer hardware. Preferably, computer scientists will always have the choice to put their hardware at precisely the location they prefer: their own room, a small "server cupboard" in the corridor, a central server room in the building, or externally in a private or even public cloud.
- The NU building should have, upon entry, the look and feel of an IT research building. IT laboratories will be highly visible on entering the building. Building information and facilities will be as modern as possible and capable of being kept up to date with the development of technology.
- The NU building should also have a virtual / augmented presence upon entering the building. It should be possible to access the NU IT environment with as many technologies as possible, both for information gathering and in the controlling sense.
- The virtual presence of the NU building needs to extend far beyond the building itself, since the virtual world is not bound to place and time anyway. The preferred profile of the NU building should include an innovative presence that will attract a lot of attention. The IBM Watson cloud facilities may be a good example of such a virtual presence.