sm-admin AT lists.ibiblio.org (Developer Only Discussion List)
List archive
- From: Tony Smith <tony AT smee.org>
- To: sm-admin AT lists.ibiblio.org
- Subject: [SM-Admin] Perforce resource utilisation limits
- Date: Fri, 12 Mar 2004 10:06:21 +0000
All,
Yesterday someone (who shall remain nameless) performed a large sync of
*everything* in our repository, and while it ran it saturated the ADSL link in
our office (the 2Mb downstream side was fine, but we only have 256k upstream).
I killed it, but it was restarted shortly afterwards. Needless to say, my
colleagues were unimpressed.
To prevent it from happening again, I'm going to put in place some resource
limits that cap the amount of data people can transfer in a single command.
If I get these limits wrong and commands you think are reasonable start
getting rejected, please let me know.
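For the curious, what I have in mind is along the lines of Perforce's
per-group command limits, set through "p4 group". The group name and numbers
below are only placeholders, not the final values:

  # Spec edited with "p4 group developers" on the server.
  # Placeholder values only; the real limits are still to be decided.
  Group:       developers
  MaxResults:  10000
  MaxScanRows: 50000
  Users:
          tony

MaxResults caps how many rows a single command may return, and MaxScanRows
caps how many rows it may scan on the server.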
I'll also be writing a spell for the Perforce proxy server, which caches
revisions on the client's side of the link, and I'll ask people to use it to
help keep the bandwidth usage down a little.
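Once the spell is there, using the proxy should be as simple as something
like the following (the host names, port and paths are examples, not the real
setup):

  # Run p4p somewhere near the clients, caching revisions on local disk.
  # Host names, port and cache path below are examples only.
  p4p -p 1666 -t perforce.example.org:1666 -r /var/cache/p4p -L /var/log/p4p.log

  # Each client then points P4PORT at the proxy instead of the central server.
  export P4PORT=proxy.example.org:1666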
Thanks for your help with this.
Tony
Thread:
- [SM-Admin] Perforce resource utilisation limits, Tony Smith, 03/12/2004
- Message not available
- Re: [SM-Admin] Perforce resource utilisation limits, Tony Smith, 03/12/2004
- Re: [SM-Admin] Perforce resource utilisation limits, Terry Ross, 03/12/2004
- Re: [SM-Admin] Perforce resource utilisation limits, Tony Smith, 03/12/2004
- Re: [SM-Admin] Perforce resource utilisation limits, Justin Rocha (Xenith), 03/12/2004
- Re: [SM-Admin] Perforce resource utilisation limits, Tony Smith, 03/12/2004
- Re: [SM-Admin] Perforce resource utilisation limits, Terry Ross, 03/12/2004
- Re: [SM-Admin] Perforce resource utilisation limits, Tony Smith, 03/12/2004
- Message not available