1) Message boards : Theory Application : Docker on Mac
Message 8787 Posted 29 Apr 2025 by computezrmle
Check the permissions:
ls -dhal /usr/bin
dr-xr-xr-x 1 root root 62K 25. Apr 06:05 /usr/bin
If 'w' is not set, run 'chmod +w /usr/bin' to allow writing. Then create the link and finally remove the 'w' by running 'chmod -w /usr/bin'.
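Combined with the link command from message 8784 below, the full sequence might look like this minimal sketch (assuming podman is installed and the commands are run with root privileges; substitute docker if that is what you use):

    chmod +w /usr/bin
    ln -s /usr/bin/podman /usr/bin/unknown
    chmod -w /usr/bin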
2) Message boards : Theory Application : Docker on Mac
Message 8784 Posted 28 Apr 2025 by computezrmle
Due to a bug in BOINC, the client tries to execute the command "unknown" instead of "docker" or "podman". Since "unknown" does not exist, you get "command not found". To make it work, create a symbolic link "unknown" which points to "docker" or "podman" (whichever you have installed), like:
ln -s /usr/bin/podman /usr/bin/unknown
3) Message boards : Theory Application : integrate Docker-IP in LAN - Win11pro
Message 8763 Posted 22 Apr 2025 by computezrmle
In reply to maeax's message of 22 Apr 2025:
CVMFS_HTTP_PROXY="http://10.xxx.yyy.zzz:3128|
CVMFS_HTTP_PROXY="http://10.xxx.yyy.zzz:3128" |
CVMFS_HTTP_PROXY="http://10.xxx.yyy.zzz:3128|direct"
This is wrong! The correct syntax is described here:
https://cvmfs.readthedocs.io/en/stable/cpt-configure.html#proxy-lists
Besides that, it is important NOT to add DIRECT to the first proxy group! Instead, add it at the end, separated by a ';'.
If you have a single local proxy, the preferred solution is to configure it via 'grid-wpad' 1) and set:
CVMFS_HTTP_PROXY="auto;DIRECT"
2nd best is to configure it via the BOINC client. The last option is to configure it via the environment like this:
CVMFS_HTTP_PROXY="http://p1.site.example.org:3128;DIRECT"
If you have at least 2 local proxies, the preferred solution is again to configure them via 'grid-wpad' 1) and set:
CVMFS_HTTP_PROXY="auto;DIRECT"
Configuring this via BOINC is not possible. The last option is to configure them via the environment like this:
CVMFS_HTTP_PROXY="http://p1.site.example.org:3128|http://p2.site.example.org:3128;DIRECT"
CVMFS_PROXY_SHARD=yes
That way CVMFS_HTTP_PROXY defines load balancing per client 2) between p1 and p2 as well as failover: DIRECT is only used if both proxies do not respond. 'CVMFS_PROXY_SHARD=yes' enables load balancing between p1 and p2 on request level (requires a recent CVMFS client). The recent Theory apps enable CVMFS_PROXY_SHARD automatically.
1) requires a local service 'grid-wpad' that delivers a valid wpad.dat via http://grid-wpad/wpad.dat
2) here: per task/container if the CVMFS instance on the host is used
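For the two-proxy case, a sketch of how the settings above would go into /etc/cvmfs/default.local on the host (p1/p2 are the placeholder hostnames from the post):

    CVMFS_HTTP_PROXY="http://p1.site.example.org:3128|http://p2.site.example.org:3128;DIRECT"
    CVMFS_PROXY_SHARD=yes

Afterwards apply the change with 'sudo cvmfs_config reload' (see message 8618 below).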
4) Message boards : Theory Application : BOINC client v8.2.1
Message 8751 Posted 18 Apr 2025 by computezrmle
In reply to maeax's message of 18 Apr 2025:
set local proxy:
This is incomplete. To avoid CVMFS configuring an unwanted backup proxy (e.g. one from CERN), 'DIRECT' must be added:
CVMFS_HTTP_PROXY="http://xx.yy.zz:3128;DIRECT"
5) Message boards : Theory Application : BOINC client v8.2.1
Message 8744 Posted 17 Apr 2025 by computezrmle
Yes. Add the missing packages in this line:
dnf install -y libxcrypt-compat bc bzip2 lighttpd procps-ng make gcc which cvmfs && \
Must be:
dnf install -y libxcrypt-compat bc bzip2 lighttpd procps-ng make gcc which cvmfs bind-utils netcat && \
The 'remove' error can be ignored for now, but the lighttpd IPv6 issue may need a fix in the config file. Without lighttpd the proxy configuration can't be forwarded to CVMFS.
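One possible fix for the IPv6 issue - an assumption, not confirmed in this thread - would be to disable IPv6 in /etc/lighttpd/lighttpd.conf:

    server.use-ipv6 = "disable"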
6) Message boards : Theory Application : BOINC client v8.2.1
Message 8742 Posted 17 Apr 2025 by computezrmle
Please post the Dockerfile sent with this app. I suspect there's something missing.
7) Message boards : Theory Application : BOINC client v8.2.1
Message 8732 Posted 17 Apr 2025 by computezrmle
Either use a CVMFS client on the host or manually create the directory /cvmfs and set the permissions to 777. This ensures it can be mounted from inside the container.
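A minimal sketch of the manual option (run with root privileges):

    mkdir -p /cvmfs
    chmod 777 /cvmfs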
8) Message boards : Theory Application : New vbox version v7.49
Message 8714 Posted 16 Apr 2025 by computezrmle
In reply to boboviz's message of 16 Apr 2025:
But still no download.
Did you set '<dont_use_vbox>1</dont_use_vbox>' in cc_config.xml? To get vbox tasks this must be removed or set to '<dont_use_vbox>0</dont_use_vbox>'.
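For reference, a minimal cc_config.xml sketch with the option cleared (the surrounding elements are the standard BOINC layout):

    <cc_config>
      <options>
        <dont_use_vbox>0</dont_use_vbox>
      </options>
    </cc_config>

The client picks this up after a restart or via 'Read config files' in the BOINC Manager.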
9) Message boards : General Discussion : Docker instead VirtualBox?
Message 8696 Posted 10 Apr 2025 by computezrmle
In reply to boboviz's message of 10 Apr 2025:
A comment from Boinc developers..
I guess you refer to this comment:
https://github.com/BOINC/boinc/issues/6134#issuecomment-2740452125
If so you may have also noticed this:
"Wait for a couple of releases for feedback, and remove this functionality completely together with VBox support (will take a couple of years for that until all the projects that use VBox now switch to WSL/Docker)."
At least for now (in relation to LHC@home):
- docker apps are under heavy development and not even released on prod
- snapshots are not yet implemented/tested
- there is no long term experience; maybe there are currently unknown pitfalls
10) Message boards : CMS Application : New Version v61.25
Message 8693 Posted 9 Apr 2025 by computezrmle
CMS 61.25 is from Nov 2024 and uses vboxwrapper 26208 (or a development artifact that reports 26208). On macOS you need vboxwrapper 26210. Hence it makes no sense to run those tasks, as they will all fail. The recent CMS 70.91 on prod uses vboxwrapper 26210.
11) Message boards : Theory Application : Two theory VB tasks on macOS, one working, the other not
Message 8674 Posted 3 Apr 2025 by computezrmle
In reply to [AF>Le_Pommier] Jerome_C2005's message of 2 Apr 2025:
... You can clearly see the total amount of CPU time is 30mn (after 6 hours running), compared to 8 hours for the other one (after 9 hours running).
Thanks for reporting. The process showing around 30 min CPU within 6 hrs of runtime most likely hangs. The scripts running inside the VM are currently under heavy development. This affects the BOINC level as well as deeper levels (scientific app). Fortunately vboxwrapper 26210 seems to fix an issue on macOS that was introduced with the previous version.
12) Message boards : Theory Application : New version v7.44
Message 8643 Posted 27 Mar 2025 by computezrmle
Works here. Do you use the most recent BOINC client artifact from https://github.com/BOINC/boinc?
13) Message boards : Theory Application : Docker on Windows
Message 8638 Posted 27 Mar 2025 by computezrmle
In reply to boboviz's message of 27 Mar 2025:
But these are, anyway, validated.
On dev the number of invalids/errors is limited to 32 in a row (IIRC per core per computer). If a computer exceeds this limit it will not get further work for 24 h (or maybe until midnight). Since those errors are usually not what we test here, the tasks report a success back to BOINC. This may change once the app_version moves to prod (or maybe if people misuse dev as a prod-like project).
14) Message boards : Theory Application : Docker on Windows
Message 8636 Posted 27 Mar 2025 by computezrmle
In reply to boboviz's message of 27 Mar 2025:
In reply to Crystal Pellet's message of 27 Mar 2025:
Depends on mcplots. ATM the queue sends lots of tasks that fail early. In the future there will be short tasks as well as tasks running for days, as usual, because the scientific payload is more or less the same.
3. High CPU usage of the main process vmmem during the event processing part:
25% of 4 cores => 1 core
20% of 12 cores => 2.4 cores => 2 tasks with 120% each
20% of 16 cores => 3.2 cores => 2 tasks with 160% each
At least for the 12-core case that is roughly within the normal range.
15) Message boards : Theory Application : Docker on Windows
Message 8632 Posted 27 Mar 2025 by computezrmle
Some remarks to the docker version:
Thanks. Good to know.
As for (2.): that's at least not worse than before using native.
As for (3.): Does it mean you ran a scenario 'A' with 1 vbox task and no docker tasks, and later ran 'B1' with no vbox tasks and n (how many?) docker tasks? Or did you run vbox beside docker in scenario 'B2'?
If you monitor the docker containers - e.g. running 'docker stats' or 'podman stats' - you may notice that some containers use far more than 100% CPU. This is because each task runs 2 processes, the mc-generator and rivetvm.
Errors during downloading metadata for repository 'epel'
A temporary glitch affecting the CDN where Fedora hosts the mirror list. As a result the image build can't complete. This is not under CERN's control. It should work again after a few minutes, when the DNS records time out and the next request gets the IP of a 'good' server.
16) Message boards : Theory Application : Docker on Linux
Message 8628 Posted 26 Mar 2025 by computezrmle
In reply to Toby Broom's message of 26 Mar 2025:
I guess auto does not work then.
Right, "auto" is not supported, at least not yet. ATM only the classical method is supported:
1. Define an environment variable, here: http_proxy=http://proxy:port
2. Export that variable, here via containers.conf (see the sketch below)
3. Create a script inside the container that reads the variable and does the necessary steps, here: add CVMFS_HTTP_PROXY="http://proxy:port;DIRECT" to /etc/cvmfs/default.local
The script is already available in the Linux app_version. The Windows app_version just needs an update.
Edit: @Toby Broom
You set "http_proxy=192.168.1.179:3128" instead of "http_proxy=http://192.168.1.179:3128". For https it must be "https_proxy=http://192.168.1.179:3128" (sic!). Due to the missing protocol CVMFS can't use the proxy and falls back to DIRECT.
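A minimal containers.conf sketch for step 2, assuming podman and reusing the proxy address from the post (the file usually lives at /etc/containers/containers.conf or ~/.config/containers/containers.conf):

    [containers]
    env = ["http_proxy=http://192.168.1.179:3128", "https_proxy=http://192.168.1.179:3128"]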
17) Message boards : Theory Application : Docker on Windows
Message 8623 Posted 26 Mar 2025 by computezrmle
In reply to boboviz's message of 26 Mar 2025:
In reply to Crystal Pellet's message of 26 Mar 2025:
You find most of it in a few threads here. It's currently a moving target, hence it makes no sense to write complete documentation now. What works on platform A today may not work on platform B, so modifications will have to be tested first.
18) Message boards : Theory Application : Docker on Windows
Message 8622 Posted 26 Mar 2025 by computezrmle
+1
The interesting part of the log is this:
Mounted CVMFS in the container.
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=1
job: diskusage=6828
job: logsize=16 k
job: times= 0m0.010s 0m0.000s 0m19.678s 0m7.623s
job: cpuusage=27
===> [runRivet] Wed Mar 26 13:03:36 UTC 2025 [boinc pp z1j 8000 180 - pythia8 8.313 tune-monash13 100000 92] Job Finished
It shows the log output of the scientific app. The task was very short because the scientific app failed with "job: run exitcode=1". ATM there are lots of those around, but it is not related to docker. The next docker version will tail runRivet.log to the BOINC slot (already available on Linux) for easier monitoring.
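Once runRivet.log is tailed into the slot directory, it can be followed live; a sketch assuming the default Linux BOINC data directory and slot 0 (both path and slot number are placeholders):

    tail -f /var/lib/boinc/slots/0/runRivet.log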
19) Message boards : Theory Application : Docker on Linux
Message 8619 Posted 26 Mar 2025 by computezrmle
In reply to computezrmle's message of 25 Mar 2025:
create_args runs without "--cap-add=SYS_ADMIN" "--device /dev/fuse" if "chmod go+w /cvmfs" is set on the host.
I was testing this back and forth. Unfortunately we can't avoid "--cap-add=SYS_ADMIN" and "--device /dev/fuse" when CVMFS inside the container should be used. Hence, to simplify deployment, both options should remain in the *.toml file:
create_args = "--cap-add=SYS_ADMIN --device /dev/fuse -v /cvmfs:/cvmfs:shared"
20) Message boards : Theory Application : Docker on Linux
Message 8618 Posted 26 Mar 2025 by computezrmle
In reply to Toby Broom's message of 26 Mar 2025:
got 2 that worked:
+1
Your log tells you this: "Using CVMFS on the host." Hence, you need to configure the CVMFS client on the host to use your local Squid. Check if this is set in /etc/cvmfs/default.local:
CVMFS_HTTP_PROXY="http://your_proxy_name_or_IP:port;DIRECT"
Then (while no container is running) run "sudo cvmfs_config reload" on the host. To forward the proxy to your containers, set the container environment as described here:
https://lhcathomedev.cern.ch/lhcathome-dev/forum_thread.php?id=682&postid=8607
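To verify that the host client actually picked up the proxy, one option is a quick check like this sketch (grid.cern.ch is only an example repository name):

    cvmfs_config showconfig grid.cern.ch | grep CVMFS_HTTP_PROXY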