
Central file server: Maintenance work (part 2) on 14.12.2023 from 6 pm

#ufrstatus #maintenance Maintenance work (firmware update) will take place on Thursday, 14 December 2023, expected to last from 18:00 to 01:00. Individual interruptions at different times must be expected.

Hello,

Maintenance work will take place on Thursday, 14 December 2023 (firmware update), which is expected to last between 18:00 and 01:00.
Individual interruptions at different times must be expected.
The affected services and the effects are described below.

==================
Affected services:
==================

In addition to the home directories, all services that use the central file server are affected.
These services include, among others: Ilias, web server, work group server, BSCW, NEMO, bwLehrpool, home directories, login server, shares / group drives and profiles (Windows)

==================
General effects:
==================

Depending on the type of connection of the various services, outages may last for different lengths of time.
Login, session and storage problems can therefore occur at any time during the maintenance window.
If necessary, please refer to the protocol-specific notes described below.

Each storage node is updated and restarted individually, one after the other. Each node is therefore unavailable for the duration of its update (approx. 20-30 minutes).
Services whose protocol connection automatically switches to another storage node will only be affected for a short time.
Services whose protocol connection does not allow automatic switching will therefore be unavailable for up to approx. 40 minutes.
As the individual storage nodes may be updated at any point during the window, it is not possible to predict when a particular service will be affected.

Note for home directories / shares / group drives: With these services, the directory may be temporarily unavailable and access may hang.
Depending on the timeout, access may become possible again after just a few minutes, in which case you simply need to wait briefly.
If access is still not possible after a longer period of time, you may have to re-establish the connection manually.

Notes for the various protocols:

==================
Impact for NFSv3 customers who use
ufr-dyn.isi1.public.ads.uni-freiburg.de
==================

Customers who mount our storage system via NFSv3 using the hostname ufr-dyn.isi1.public.ads.uni-freiburg.de should experience only minimal impact.
The reason for this is that the IP address of a storage node is automatically taken over by another node as soon as the original node becomes unavailable.
We therefore expect only a brief latency to be noticeable.

==================
Impact for all other clients (SMB + NFSv3/v4)
who use ufr.isi1.public.ads.uni-freiburg.de
==================

For all customers who mount the storage area via ufr.isi1.public.ads.uni-freiburg.de (both SMB and NFSv3/v4), this procedure means above all that at some point the node carrying the connection to the storage system will be unavailable for the duration of its restart / update (approx. 30 minutes).
If necessary, a new connection to the storage system can be established immediately, either manually or automatically, in order to connect to another node.
This minimises downtime, although it is of course possible to end up on a node that will itself be updated later.

NFS/SMB: With a hard mount, the connection will inevitably hang until the storage node is available again.
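For NFS clients that want to avoid an indefinite hang during the restart, the mount options below sketch the difference between hard and soft mounts. The export path /export and the mount point /mnt/data are placeholders, not values from this announcement.

```shell
# Hard mount (the NFS default): I/O blocks until the node is back.
#   mount -t nfs -o vers=3,hard ufr.isi1.public.ads.uni-freiburg.de:/export /mnt/data
#
# Soft mount: give up after timeo (in tenths of a second) x retrans
# attempts and return an I/O error instead of hanging. Use with care:
# applications may see read/write errors during the outage.
#   mount -t nfs -o vers=3,soft,timeo=100,retrans=3 ufr.isi1.public.ads.uni-freiburg.de:/export /mnt/data
```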

We apologise for any inconvenience this may cause and will endeavour to keep the disruption to a minimum.

With kind regards,
Your Storage Team