Message boards :
ATLAS Application :
Testing CentOS 7 vbox image
Joined: 28 Jul 16 Posts: 484 Credit: 394,839 RAC: 0
You may also test whether the top service automatically restarts when you hit 'q' or 'CTRL-c'. But be aware: the systemd defaults allow no more than 5 restarts within 10 seconds. If this rate limit is exceeded, the service will not (automatically) restart any more.
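The limit described above matches systemd's built-in defaults (StartLimitIntervalSec=10s, StartLimitBurst=5, settable in the [Unit] section). As a sketch of how one could loosen it, here is a drop-in override; the unit name atlas-top.service is a hypothetical placeholder, not taken from this thread:

```ini
# /etc/systemd/system/atlas-top.service.d/override.conf
# NOTE: 'atlas-top.service' is an assumed unit name for illustration.
[Unit]
# Allow up to 10 restarts within any 60-second window before
# systemd gives up and stops restarting the service.
StartLimitIntervalSec=60
StartLimitBurst=10
```

After editing, `systemctl daemon-reload` picks up the change, and a unit that has already hit the limit can be cleared with `systemctl reset-failed <unit>`.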
Joined: 20 Apr 16 Posts: 180 Credit: 1,355,327 RAC: 0
There is now an experimental new version of console 2 available from computezrmle. Unfortunately it doesn't work for me, but I have put some WUs in here for debugging. These are short test tasks with 10 events.
Joined: 13 Feb 15 Posts: 1188 Credit: 862,257 RAC: 15
montty2: permission denied
Joined: 20 Apr 16 Posts: 180 Credit: 1,355,327 RAC: 0
After some fixes from computezrmle it works for me now:
Joined: 13 Feb 15 Posts: 1188 Credit: 862,257 RAC: 15
"After some fixes from computezrmle it works for me now:" I want to see that myself ;) Kudos to computezrmle!! btw: When I have no new tasks in the queue and ask for a new one, every time the huge 0.84 vdi is downloaded again. Any idea??
Joined: 20 Apr 16 Posts: 180 Credit: 1,355,327 RAC: 0
"btw: When I have no new tasks in the queue and ask for a new one, every time the huge 0.84 vdi is downloaded again. Any idea??" I also see this whenever I reconnect to the dev project after not running it for a while, but while I'm doing a steady stream of work it doesn't download it again. Not sure what the problem is, but maybe it's due to re-activating deprecated versions.
Joined: 28 Jul 16 Posts: 484 Credit: 394,839 RAC: 0
I don't see this issue. Before I started the monitoring test I did a project reset, got the recent vdi, and have a dry buffer while I think about changes in the script. When I get fresh work, the already downloaded vdi is used.
Joined: 20 Apr 16 Posts: 180 Credit: 1,355,327 RAC: 0
I put some real tasks in now, and it looks good: only 7 hours left to go...
Joined: 28 Jul 16 Posts: 484 Credit: 394,839 RAC: 0
I got a 6-core with 200 events. 11 events done. Runtimes 2474-3730 s per event (!!) Time left: 1d 10h 20m
Joined: 28 Jul 16 Posts: 484 Credit: 394,839 RAC: 0
"I got a 6-core with 200 events." That task is still running. It just finished an event after 4302 s. 182 events done. Still 3h to go.
Joined: 13 Apr 15 Posts: 138 Credit: 2,969,210 RAC: 0
196 (0x000000C4) EXIT_DISK_LIMIT_EXCEEDED
Quite annoying after 4+ days, as it otherwise ran fine. The logs show
2019-11-01 23:36:09 (1796): Guest Log: HITS file was successfully produced
2019-11-01 23:36:42 (1796): Guest Log: Successfully finished the ATLAS job!
and
2019-11-01 23:36:49 (1796): Guest Log: *** Success! Shutting down the machine. ***
so it would seem the error happened during post-processing.
Forgot to mention: I'm also seeing the full download if there is a break in work. If work is contiguous it only downloads the stuff to run the new job, but if there is a break between finishing and uploading a job and requesting a new one, the vdi is downloaded again. I've not checked whether the vdi is being deleted on completion or whether it is being overwritten. I'll look tomorrow.
Send message Joined: 22 Apr 16 Posts: 677 Credit: 2,002,766 RAC: 0 |
This is from my last successful ATLAS task on -dev:
2019-10-24 22:59:19 (12188): Guest Log: total 449184
2019-10-24 22:59:19 (12188): Guest Log: -rwxrwxrwx. 1 root root 209689841 Oct 23 20:17 ATLAS.root_0
2019-10-24 22:59:19 (12188): Guest Log: -rwxrwxrwx. 1 root root 244613206 Oct 24 20:56 HITS.pool.root.1
2019-10-24 22:59:19 (12188): Guest Log: -rwxrwxrwx. 1 root root 8748 Oct 23 20:17 init_data.xml
2019-10-24 22:59:19 (12188): Guest Log: -rwxrwxrwx. 1 root root 268662 Oct 23 20:14 input.tar.gz
2019-10-24 22:59:19 (12188): Guest Log: -rwxrwxrwx. 1 root root 5355520 Oct 24 2019 result.tar.gz
2019-10-24 22:59:19 (12188): Guest Log: -rwxrwxrwx. 1 root root 815 Oct 23 20:14 RTE.tar.gz
2019-10-24 22:59:19 (12188): Guest Log: -rwxrwxrwx. 1 root root 8659 Oct 23 20:14 start_atlas.sh
2019-10-24 22:59:19 (12188): Guest Log: *** Success! Shutting down the machine. ***
This is from your unsuccessful task: an 18.79 MByte result file against 5.36 MByte from my task.
2019-11-01 23:36:48 (1796): Guest Log: total 459028
2019-11-01 23:36:48 (1796): Guest Log: -rwxrwxrwx. 1 root root 209392385 Oct 28 18:51 ATLAS.root_0
2019-11-01 23:36:48 (1796): Guest Log: -rwxrwxrwx. 1 root root 241559602 Nov 1 23:32 HITS.pool.root.1
2019-11-01 23:36:48 (1796): Guest Log: -rwxrwxrwx. 1 root root 5923 Oct 28 18:51 init_data.xml
2019-11-01 23:36:48 (1796): Guest Log: -rwxrwxrwx. 1 root root 268665 Oct 28 07:44 input.tar.gz
2019-11-01 23:36:48 (1796): Guest Log: -rwxrwxrwx. 1 root root 18790400 Nov 1 2019 result.tar.gz
2019-11-01 23:36:48 (1796): Guest Log: -rwxrwxrwx. 1 root root 815 Oct 28 07:44 RTE.tar.gz
2019-11-01 23:36:49 (1796): Guest Log: -rwxrwxrwx. 1 root root 8659 Oct 28 07:44 start_atlas.sh
2019-11-01 23:36:49 (1796): Guest Log: *** Success! Shutting down the machine. ***
Maybe you reached a limit because of 8 GByte RAM, or other limits on the output files in BOINC or Windows. Very painful, after such a long time running successfully.
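To make the size comparison concrete, here is a small shell sketch that converts the result.tar.gz byte counts from the two directory listings above into whole megabytes (the byte values are copied directly from the quoted logs; everything else is illustrative):

```shell
#!/bin/sh
# result.tar.gz from the successful task (bytes, from the guest log above).
ok_result=5355520
# result.tar.gz from the failed task (bytes, from the guest log above).
bad_result=18790400

# Convert to whole megabytes (1 MB = 1,000,000 bytes, integer division).
ok_mb=$(( ok_result / 1000000 ))
bad_mb=$(( bad_result / 1000000 ))

echo "successful task result.tar.gz: ${ok_mb} MB"
echo "failed task result.tar.gz:     ${bad_mb} MB"
```

So the failed task's result archive is in the megabyte range, not gigabytes; whether that alone trips BOINC's per-task disk bound depends on the limit set by the project.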
©2024 CERN