[suggest] meep-mpi, h5utils

Kristian Medri kmedri at doe.carleton.ca
Sat Feb 20 02:08:32 CET 2010


Running both the holey waveguide cavity and the 1D Fabry-Perot examples
on our cluster has shown that, although certain functions run more
quickly under MPI, the overall execution time is slower for these test
cases. When I come across a more suitable example I will post it.
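
For anyone who wants to reproduce the comparison, the figures are plain
wall-clock timings of the serial and MPI runs, along these lines
(assuming the serial binary is installed as meep and the MPI one as
meep-mpi; the .ctl file ships with the meep examples):

  # serial run
  time meep holey-wvg-cavity.ctl

  # parallel run on four cores; adjust -np to your machine
  time mpirun -np 4 meep-mpi holey-wvg-cavity.ctl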

-----Original Message-----
From: suggest-bounces at lists.rpmforge.net
[mailto:suggest-bounces at lists.rpmforge.net] On Behalf Of
kmedri at doe.carleton.ca
Sent: February 19, 2010 10:22 AM
To: suggest at lists.rpmforge.net
Subject: RE: [suggest] meep-mpi, h5utils

This is what I do right now; it works with MPI to make use of
multiple cores on a single machine:

http://www.doe.carleton.ca/~kmedri/research/centosmeepinstall.html
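
In short, with mpich2 set up locally the run command comes down to
something like this (mysim.ctl is a placeholder for your control file;
adjust -np to the number of cores):

  mpd &                  # start the local mpich2 daemon if needed
  mpirun -np 2 meep-mpi mysim.ctl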

Regarding multiple machines, I will run further tests once the install
on the machines in our cluster has been approved.

Quoting "Yury V. Zaytsev" <yury at shurup.com>:

> Hi!
>
> Did you find a way of building it with MPI yet? To me this seems to be
> the major issue. I will try to look into HDF5 ASAP, since I might need
> it for a project of mine anyway.
>
> --
> Sincerely yours,
> Yury V. Zaytsev
>
> On Thu, 2010-02-18 at 18:13 -0500, Kristian Medri wrote:
>> As I've posted here:
>>
>>
>> http://thread.gmane.org/gmane.comp.science.electromagnetism.meep.general/3077/focus=3083
>>
>> I am curious as to what kind of run times other people can achieve (code,
>> commands, and execution times available in that thread).
>>
>> -----Original Message-----
>> From: suggest-bounces at lists.rpmforge.net
>> [mailto:suggest-bounces at lists.rpmforge.net] On Behalf Of
>> kmedri at doe.carleton.ca
>> Sent: February 8, 2010 12:08 AM
>> To: suggest at lists.rpmforge.net
>> Subject: RE: [suggest] meep-mpi, h5utils
>>
>> Progress:
>>
>> Used:
>> http://www.doe.carleton.ca/~kmedri/research/h5utils-1.12.1.tar.gz
>> http://www.doe.carleton.ca/~kmedri/research/h5utils.spec
>>
>> To build:
>> http://www.doe.carleton.ca/~kmedri/research/h5utils-1.12.1-2.x86_64.rpm
>>
>> http://www.doe.carleton.ca/~kmedri/research/h5utils-debuginfo-1.12.1-2.x86_64.rpm
>>
>> Used:
>> http://www.doe.carleton.ca/~kmedri/research/harminv-1.3.1.tar.gz
>> http://www.doe.carleton.ca/~kmedri/research/harminv.spec
>>
>> To build:
>> http://www.doe.carleton.ca/~kmedri/research/harminv-1.3.1-16.x86_64.rpm
>>
>> http://www.doe.carleton.ca/~kmedri/research/harminv-debuginfo-1.3.1-16.x86_64.rpm
>>
>> http://www.doe.carleton.ca/~kmedri/research/harminv-devel-1.3.1-16.x86_64.rpm
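
Both packages above were built with the stock rpmbuild recipe; assuming
a standard rpmbuild tree with %_topdir set to ~/rpmbuild, it comes down
to:

  cp h5utils-1.12.1.tar.gz ~/rpmbuild/SOURCES/
  rpmbuild -ba h5utils.spec    # and likewise for harminv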
>>
>> Tried:
>> http://www.doe.carleton.ca/~kmedri/research/libctl-3.1.tar.gz
>> http://www.doe.carleton.ca/~kmedri/research/libctl.spec
>> but it seems I need to consider the following instead of the newest
>> RPMs available:
>> http://www.doe.carleton.ca/~kmedri/research/libtool-2.2.6a.tar.gz
>> This is odd, since I do not recall running into this problem when
>> installing it from source without the spec. libctl is an optional
>> front end (though used by many, including the authors of meep), so I
>> could continue by removing it from meep's BuildRequires.
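
To spell that out, the edit I have in mind in meep.spec amounts to the
following (the tag spelling is whatever the spec at hand uses, and the
--without-libctl switch applies only if meep's configure accepts it):

  #BuildRequires: libctl-devel
  %configure F77=gfortran --enable-shared --without-libctl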
>>
>> Tried:
>> http://www.doe.carleton.ca/~kmedri/research/meep-1.0.3.tar.gz
>> http://www.doe.carleton.ca/~kmedri/research/meep.spec.fromSuSE as meep.spec
>> but the build fails with:
>> File not found by glob: /var/tmp/meep-1.0.3-2-root-root/usr/lib64/*.so.*
>> I will try to obtain a newer spec file from a SuSE SRPM and see
>> whether this persists, since using
>> http://www.doe.carleton.ca/~kmedri/research/meep.spec.old as meep.spec
>> builds:
>> http://www.doe.carleton.ca/~kmedri/research/meep-1.0.3-1.x86_64.rpm
>> http://www.doe.carleton.ca/~kmedri/research/meep-debuginfo-1.0.3-1.x86_64.rpm
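
My reading of that glob error: the %files section of the spec expects
versioned shared libraries, i.e. something like

  %files
  %{_libdir}/*.so.*

but nothing matching was installed into the buildroot, which usually
means the libraries came out static-only (no --enable-shared, or the
libtool trouble again). Listing what actually landed in the buildroot
settles it:

  find /var/tmp/meep-1.0.3-2-root-root -name '*.so*'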
>>
>> Quoting "Yury V. Zaytsev" <yury at shurup.com>:
>>
>> > On Fri, 2010-02-05 at 12:43 -0500, Kristian Medri wrote:
>> >> How about:
>> >>
>> >> http://www.doe.carleton.ca/~kmedri/research/meep.spec.SuSE
>> >>
>> >
>> > Hmmm... this SPEC looks more or less good to me; it can be adopted
>> > with minor changes if no dependency issues arise.
>> >
>> >> If I am to continue making my own specific meep.spec in order to
>> >> configure --with-mpi, is it okay to set Authority: dag on mine as well?
>> >
>> > Authority is normally the person who manages the committed SPECs
>> > within the RPMForge repo. In this case it will be me ("yury").
>> >
>> >> Or is there some way of using the above and setting it to configure
>> >>  --with-mpi?
>> >
>> > (I have never ever used/built anything requiring MPI before.)
>> >
>> > Normally you should be able to change
>> >
>> > %configure F77=gfortran --enable-shared
>> >
>> > to
>> >
>> > %configure F77=gfortran --enable-shared --with-mpi
>> >
>> > add corresponding build requirements
>> >
>> > BuildRequires: mpich2 mpich2-devel
>> >
>> > and it should work.
>> >
>> > --
>> > Sincerely yours,
>> > Yury V. Zaytsev
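
For what it is worth, once a --with-mpi build installs cleanly, a quick
sanity check is to confirm that the binary really links against the MPI
libraries (the binary name and the exact library depend on how the spec
installs things):

  ldd $(which meep-mpi) | grep -i mpi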