1) Message boards : Theory Application : boinc-buda-runner V4 and BOINC 8.2.8
Message 9214
Posted 3 Dec 2025 by Yeti
In reply to computezrmle's message of 2 Dec 2025:
Forward a proxy via environment variable.
More details can be found here:
https://lhcathomedev.cern.ch/lhcathome-dev//forum_thread.php?id=682&postid=8628
https://lhcathomedev.cern.ch/lhcathome-dev/forum_thread.php?id=682&postid=8607
Thanks Stefan, but first I think it is the job of the project to ensure that the master configuration from the BOINC client is carefully handed over to the project app. It was the same when LHC started to use VirtualBox, so I think they shouldn't make the same mistake a second time.

Second, as an interim solution this may be okay, but I'm too stupid to do it as described:
Yeti wrote:
In reply to computezrmle's message of 26 Mar 2025: ATM only the classical method is supported:
1. Define an environment variable, here: http_proxy=http://proxy:port
2. Export that variable, here via containers.conf
3. Create a script inside the container that reads the variable and does the necessary steps
here: add CVMFS_HTTP_PROXY="http://proxy:port;DIRECT" to /etc/cvmfs/default.local

The script is already available in the Linux app_version.
The Windows app_version just needs an update.
I tried to do this, but I was not able to follow it.

I'm running Windows 11 with Podman, but:

  1. Where should I define the environment variable?
  2. Where can I find the correct containers.conf? I have 4 of these in different paths
  3. In my boinc-buda-runner I don't have an /etc/cvmfs/default.local; I couldn't find any cvmfs directory


Any ideas ?
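For what it's worth, step 2 of the quoted recipe would amount to something like the following (a sketch, not verified on Windows/WSL; the proxy address is made up, and the per-user rootless path is an assumption — Podman merges several containers.conf files, with the per-user copy usually taking precedence, which may explain the four candidates):

```shell
# Sketch of step 2 from the quoted recipe. Assumptions: the proxy address
# is an example, and the rootless per-user containers.conf is the one
# Podman actually reads (it overrides /etc/containers/containers.conf and
# /usr/share/containers/containers.conf).
mkdir -p "$HOME/.config/containers"
cat >> "$HOME/.config/containers/containers.conf" <<'EOF'
[containers]
env = ["http_proxy=http://192.168.0.10:3128"]
EOF
```

Podman then exports http_proxy into every container it starts, which is what step 3's in-container script relies on.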

Yeti

2) Message boards : Theory Application : boinc-buda-runner V4 and BOINC 8.2.8
Message 9212
Posted 2 Dec 2025 by Yeti
For me, it looks as if it is working fine.

The BOINC client is configured to use my local Squid proxy, which has worked fine over the last years with ATLAS and Theory native tasks.

Now, when running Theory (docker) 7.6.2, I get the following messages in the WU result file:

Could not find a local HTTP proxy
CVMFS and Frontier will have to use DIRECT connections
This makes the application less efficient
It also puts higher load on the project servers
Setting up a local HTTP proxy is highly recommended
Advice can be found in the project forum
Using custom CVMFS.
Probing CVMFS repositories ...
Probing /cvmfs/alice.cern.ch... OK
Probing /cvmfs/cvmfs-config.cern.ch... OK
Probing /cvmfs/grid.cern.ch... OK
Probing /cvmfs/sft.cern.ch... OK
Excerpt from "cvmfs_config stat":
VERSION HOST PROXY
2.13.3.0 http://s1cern-cvmfs.openhtc.io DIRECT
******************************************************************
* IMPORTANT HINT(S)!
******************************************************************
CVMFS server: http://s1cern-cvmfs.openhtc.io
CVMFS proxy: DIRECT
No local HTTP proxy found.
With this setup concurrently running containers can't share
a common CVMFS cache. A local HTTP proxy is therefore
highly recommended.
More info how to configure a local HTTP proxy:
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5473
https://lhcathome.cern.ch/lhcathome/forum_thread.php?id=5474
******************************************************************
Environment HTTP proxy: not set
job: htmld=/var/www/lighttpd
job: unpack exitcode=0
job: run exitcode=0
job: diskusage=11636
job: logsize=76 k
job: times=0m0.003s 0m0.009s 6m32.049s 0m58.196s
job: cpuusage=450
Job Finished
===> [runRivet] Tue Dec 2 18:11:19 UTC 2025 [boinc ee zhad 206 - - pythia8 8.301 CP5 100000 408]
Filesystem Used Use% Mounted on
cvmfs2 201K 1% /cvmfs/cvmfs-config.cern.ch
cvmfs2 21M 1% /cvmfs/alice.cern.ch
cvmfs2 17M 1% /cvmfs/grid.cern.ch
cvmfs2 776M 20% /cvmfs/sft.cern.ch
total 814M 6% -
boinc_shutdown called with exit code 0
EOM
stderr from container:

(identical to the output above, repeated verbatim in the container's stderr)

I don't know who has the responsibility to pass the proxy setting "down" to the docker / Theory app.
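Conceptually, the hand-off in question would mean whoever creates the container injects the proxy into its environment. A purely hypothetical illustration (made-up image name and address; not what the wrapper actually does):

```shell
# Hypothetical illustration only: how a wrapper *could* forward the BOINC
# client's proxy setting into the container at creation time. The image
# name "theory-image" and the proxy address are made up; the cap-add and
# device flags mirror the ones visible in the result file below.
podman create \
  -e http_proxy="http://192.168.0.10:3128" \
  -e CVMFS_HTTP_PROXY="http://192.168.0.10:3128;DIRECT" \
  --cap-add=SYS_ADMIN --device /dev/fuse \
  theory-image
```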
3) Message boards : Theory Application : boinc-buda-runner Version 3
Message 9172
Posted 9 Oct 2025 by Yeti
In reply to Yeti's message of 8 Oct 2025:
I forgot to mention: BOINC-Central WUs seem to work together with boinc-buda-runner Version 3
Finally I found out that they don't consume any CPU time and fail at the end. So BOINC-Central doesn't work with boinc-buda-runner Version 3.
4) Message boards : Theory Application : boinc-buda-runner Version 3
Message 9171
Posted 8 Oct 2025 by Yeti
I forgot to mention: BOINC-Central WUs seem to work together with boinc-buda-runner Version 3
5) Message boards : Theory Application : boinc-buda-runner Version 3
Message 9170
Posted 8 Oct 2025 by Yeti
I was running Theory with WSL2 and boinc-buda-runner Version 2 and all seemed to be fine.

Now I have updated boinc-buda-runner to Version 3 and all Theory-WUs fail.

Examples:

https://lhcathomedev.cern.ch/lhcathome-dev/result.php?resultid=3590271

https://lhcathomedev.cern.ch/lhcathome-dev/result.php?resultid=3590261

-------------------------------------------------------------------------------------

With boinc-buda-runner I still have the following problems (BOINC client 8.2.5):

1) If WSL is not started by hand, BOINC can't access it
2) If WSL is started manually after the BOINC client has already started, BOINC can't use it
3) If WSL is started, an open CMD window always stays on the desktop
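On point 3, a common workaround on stock Windows (an assumption on my part, not project advice) is to start the distro from a hidden scheduled task rather than an interactive console:

```shell
# Sketch (untested assumption): keep the distro alive without a visible
# console by launching it from Windows Task Scheduler at logon, e.g.
#   Program:   wsl.exe
#   Arguments: -d boinc-buda-runner --exec sleep infinity
# A task configured to "Run whether user is logged on or not" starts with
# no window, so no CMD window remains on the desktop.
```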

Yeti
6) Message boards : Theory Application : Docker on Windows
Message 9164
Posted 1 Oct 2025 by Yeti
When I install the Client 8.2.5 I see these messages in the event log:

WSL: registry open failed
[error] get_wsl_information(): Error -1

Why does the client then get the new Theory-podman-tasks from the server?

So far I have nothing installed regarding Podman and WSL, but got these Theory-Podman-Tasks

I think that should be fixed
7) Message boards : General Discussion : Xtrack beam simulation v0.04
Message 9143
Posted 24 Sep 2025 by Yeti
In reply to ChelseaOilman's message of 24 Sep 2025:
What a bunch of clowns! I have over 700 tasks that will never get validated!
You shouldn't run development projects if you think like that.
8) Message boards : General Discussion : Xtrack beam simulation v0.04
Message 9133
Posted 24 Sep 2025 by Yeti
In standard LHC@home, Xtrack 0.41 is out, so I think we can stop running 0.04.

As these 0.04 tasks are running for several days, should we abort them or let them finish?
9) Message boards : Theory Application : BOINC client v8.2.1
Message 9018
Posted 5 Sep 2025 by Yeti
In reply to computezrmle's message of 18 Apr 2025:
In reply to maeax's message of 18 Apr 2025:
set local proxy:
wsl
CVMFS_HTTP_PROXY="http://xx.yy.zz:3128"
This is incomplete.
To avoid CVMFS configuring an unwanted backup proxy (e.g. from CERN), 'DIRECT' must be added.
CVMFS_HTTP_PROXY="http://xx.yy.zz:3128;DIRECT"

I have entered this on the console in WSL with the correct IP address, but the next (freshly loaded) WU didn't take notice of this setting.

Any idea ?
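One plausible explanation (my assumption): a variable set in an interactive wsl shell only lives in that shell session, so a freshly started task never sees it. A persistent variant of the quoted setting, assuming it belongs in the CVMFS client config of the environment that actually mounts /cvmfs, would be:

```shell
# Sketch (assumptions: /etc/cvmfs/default.local is the CVMFS client config
# of the environment mounting /cvmfs; xx.yy.zz:3128 is the source's own
# placeholder for the local Squid).
echo 'CVMFS_HTTP_PROXY="http://xx.yy.zz:3128;DIRECT"' | \
    sudo tee -a /etc/cvmfs/default.local
sudo cvmfs_config reload   # apply the new setting without remounting
```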
10) Message boards : Theory Application : Docker on Linux
Message 9017
Posted 4 Sep 2025 by Yeti
In reply to computezrmle's message of 26 Mar 2025:
ATM only the classical method is supported:
1. Define an environment variable, here: http_proxy=http://proxy:port
2. Export that variable, here via containers.conf
3. Create a script inside the container that reads the variable and does the necessary steps
here: add CVMFS_HTTP_PROXY="http://proxy:port;DIRECT" to /etc/cvmfs/default.local

The script is already available in the Linux app_version.
The Windows app_version just needs an update.
I tried to do this, but I was not able to follow it.

I'm running Windows 11 with Podman, but:

  1. Where should I define the environment variable?
  2. Where can I find the correct containers.conf? I have 4 of these in different paths
  3. In my boinc-buda-runner I don't have an /etc/cvmfs/default.local; I couldn't find any cvmfs directory


Any ideas ?
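For reference, step 3 of the quoted recipe could be sketched roughly like this (an illustration only, not the project's actual script; the function form and the path argument exist just to make the sketch easy to try outside a container):

```shell
#!/bin/sh
# Sketch of step 3: read the http_proxy variable exported into the
# container and turn it into a CVMFS proxy setting. The target file
# defaults to /etc/cvmfs/default.local; the optional argument is only
# there so the sketch can be tried against a scratch file.
set_cvmfs_proxy() {
    conf="${1:-/etc/cvmfs/default.local}"
    if [ -n "$http_proxy" ]; then
        # ";DIRECT" keeps a fallback in case the proxy is unreachable
        printf 'CVMFS_HTTP_PROXY="%s;DIRECT"\n' "$http_proxy" >> "$conf"
    else
        printf 'CVMFS_HTTP_PROXY="DIRECT"\n' >> "$conf"
    fi
}
```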

Yeti

11) Message boards : Theory Application : Docker on Windows
Message 9015
Posted 4 Sep 2025 by Yeti
In reply to Yeti's message of 4 Sep 2025:

1. Runtime is about 2,200 seconds, but CPU time is only 647 seconds or even less. Is this because it's the dev project, or is something still wrong?
Okay, I think it is like in the production environment: if the WU picks up something from the central queue it crunches that, otherwise it finishes the current WU.

So, the final question is: what to do so that podman respects/uses the proxy setting pointing to my Squid.
12) Message boards : Theory Application : Docker on Windows
Message 9014
Posted 4 Sep 2025 by Yeti
Okay, I think I got it.

Remain following questions:

  1. Runtime is about 2,200 seconds, but CPU time is only 647 seconds or even less. Is this because it's the dev project, or is something still wrong?

  2. The boinc-buda-runner does not start by itself; I need to start WSL manually, which keeps a terminal window open. What can I do to keep this terminal window hidden?



Yeti

13) Message boards : Theory Application : Docker on Windows
Message 9013
Posted 4 Sep 2025 by Yeti
Just started my first steps with Podman, but the first Theory-WU failed:

From BOINC-LOG:

Manni

Starting BOINC client version 8.2.5 for windows_x86_64
Running under account xxxxxxxxx
...
Docker detection in boinc-buda-runner:
- cmd: podman --version
- output: F
Docker detection in boinc-buda-runner:
- cmd: docker --version
- output: F
Windows processor group 0: 24 processors
Computer name: Manni
...
OS: Microsoft Windows 11: Professional x64 Edition, (10.00.26100.00)
Memory: 63.93 GB physical, 67.93 GB virtual
Disk: 838.34 GB total, 388.78 GB free
Local time is UTC +2 hours
Usable WSL distros:
- boinc-buda-runner (WSL 2) (default)
- OS: F (F)
- BOINC WSL distro version 1

In the Result-file, I see following:

<core_client_version>8.2.5</core_client_version>
<![CDATA[
<message>
Unzulässige Funktion. (Invalid function.)
(0x1) - exit code 1 (0x1)</message>
<stderr_txt>
docker_wrapper config:
workdir: /boinc_slot_dir
use GPU: no
create args: --cap-add=SYS_ADMIN --device /dev/fuse
verbose: 1
wsl_init(): no usable WSL distro
wsl_init() failed: -1
2025-09-04 15:31:55 (29368): called boinc_finish(1)

</stderr_txt>
]]>

So, what is wrong ?
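Given the "no usable WSL distro" message, a first diagnostic step (standard WSL commands; whether they expose the actual cause here is of course an open question) might be:

```shell
# Standard diagnostics, run in a Windows terminal. These only verify that
# the distro exists, is WSL 2, and that podman answers inside it:
wsl --list --verbose
wsl -d boinc-buda-runner --exec podman --version
```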

Yeti
14) Message boards : Sixtrack Application : XTrack suite
Message 8054
Posted 6 Apr 2023 by Yeti
They said that the first versions of xtrack are only for cpu.

Laurence wrote on 6 Feb 2023:
The Xtrack application is a new version of Sixtrack which supports GPUs. It is currently work in progress. The current status is that the application has been deployed correctly in the server and seems to be running. We hope to continue testing within the next few weeks.


So, the question is: Is XTrack-Suite the same as XTrack-Beam-Simulation?
15) Message boards : Sixtrack Application : XTrack suite
Message 8051
Posted 4 Apr 2023 by Yeti
On what kind of GPUs will XTRack-Suite run?

My client(s) are only asking for AMD/ATI-GPU-Work, but I use NVIDIA:

19887 lhcathome-dev 04-04-2023 12:19 Sending scheduler request: To fetch work.
19888 lhcathome-dev 04-04-2023 12:19 Requesting new tasks for CPU and AMD/ATI GPU
19889 lhcathome-dev 04-04-2023 12:19 Scheduler request completed: got 0 new tasks
19890 lhcathome-dev 04-04-2023 12:19 No tasks sent
19891 lhcathome-dev 04-04-2023 12:19 No tasks are available for Xtrack beam simulation
19892 lhcathome-dev 04-04-2023 12:19 Project requested delay of 61 seconds
16) Message boards : ATLAS Application : ATLAS vbox and native 3.01
Message 8040
Posted 23 Mar 2023 by Yeti
Yeti,
it was a scientist during the generation of the tasks. The file was not meant for BOINC.
You'll find the answer from CP in -prod.

Yeah, I had already read it, but someone should say a word about when it will be fixed, and whether WUs with this mistake will be cancelled before being sent out, or whatever.

I posted it here again because I didn't want to focus the whole community's attention on this point.
17) Message boards : ATLAS Application : ATLAS vbox and native 3.01
Message 8038
Posted 23 Mar 2023 by Yeti
No, it's still the old version in prod. It must be just new batches of tasks with large files. I'll ask the submitters why it's like this now.

I have stopped one Threadripper 3995 overnight.
My 80 MBit/s ISP line and 1 GBit/s network (including Squid) are running at the limit since those 1 GB Atlas downloads became active.
All Atlas tasks have finished with a HITS file so far.

David, any news on this ?

Meanwhile I have stopped all Atlas downloads; 1.2 GB for each WU is too much. Since yesterday evening I have downloaded 0.4 terabytes from the Atlas servers alone.
18) Message boards : ATLAS Application : Huge EVNT Files
Message 8034
Posted 22 Mar 2023 by Yeti
Just noticed those huge ATLAS EVNT files being downloaded from prod to different clients:
1,166,414,003 => 1.2 GB each
Same here, I got tons of them and my vDSL is overloaded:

19) Message boards : ATLAS Application : ATLAS vbox and native 3.01
Message 8033
Posted 22 Mar 2023 by Yeti
Seeing a 1.08 GB download in production.
Is this new version transferred from -dev?
Now a second download on the same PC with 1.09 GB in production.
Is the application for Atlas in prod the old one??
In Germany we say "Holland in Not" ("Holland in distress").
My vDSL works at its limit, but it seems to be overloaded with the new 1.2 GB Atlas tasks in production.



Normally, during working hours, I have a 50 MB download limit for Atlas. I have lifted it, but it is still not enough :-(
20) Message boards : ATLAS Application : ATLAS vbox and native 3.01
Message 8008
Posted 21 Mar 2023 by Yeti
With stable checkpointing, 2,000 events with 3.01 wouldn't be a problem for a lot of volunteers, including me.

