Olivier Dalle's Corner: Main / Olivier's Home Page

This is a backup site. Sorry for the inconvenience; some pages may be buggy.

The site hosting my research page is back online (it is safe to follow this link), but automatic redirections are still buggy.

Job(s) available to work in Mascotte!
Open position, starting Sep 2011:

  • postdoc (12 months) on Composability and reuse in component-based simulation, within the OSA project.

See details here

SIMUTools 2012
5th Intl. Conf. on Simulation Tools and Techniques
Desenzano del Garda, Italy, March 19–23 2012

1.  What I Do

↑ Contents

2.  Current Research

My current research activities focus on the simulation of telecommunication networks and in particular on component-based modeling techniques. In this scope, I have been involved in, and still participate in, the following projects:

2.1  Funded Projects

The INFRA-SONGS ANR Project (2012–2015)

The SONGS Project is a follow-up to the USS-SIMGRID ANR Project (see also here). The goal of the SONGS project is to extend the applicability of the SimGrid simulation framework from Grids and Peer-to-Peer systems to Clouds and High Performance Computing systems. Each type of large-scale computing system will be addressed through a set of use cases and led by researchers recognized as experts in this area. Any sound study of such systems through simulations relies on the following pillars of simulation methodology: an efficient simulation kernel; sound and validated models; simulation analysis tools; and simulation campaign management. The WP8 page

The EA DISSIMINET (Associated Team) (2011–2013)

Since January 2011, the MASCOTTE project-team has been an associated team with the ARS Laboratory at Carleton University, Ottawa, ON (Canada). This Franco-Canadian team will advance research on the definition of new algorithms and techniques for component-based simulation using a web-services-based approach. On the one hand, the use of web services is expected to solve the critical issues that pave the way toward the simulation of systems of unprecedented complexity, especially (but not exclusively) in studies involving large networks such as peer-to-peer networks. Web-service-oriented approaches have numerous advantages, such as allowing the reuse of existing simulators, allowing non-computer experts to merge their respective knowledge, or the seamless integration of complementary services (e.g. on-line storage and repositories, weather forecast, traffic, etc.). One important expected outcome of this approach is to significantly improve the simulation methodology in network studies, especially by enforcing the seamless reproducibility and traceability of simulation results. On the other hand, a net-centric approach to simulation based on web services comes at the cost of added complexity and requires new practices, both at the technical and methodological levels. The results of this common research will be integrated into both teams' discrete-event distributed simulators: the CD++ simulator at Carleton University and the simulation middleware developed in the MASCOTTE EPI, called OSA, whose developments are supported by an INRIA ADT (Development Action) named OSA starting in December 2011.

The OSA project (Supported by INRIA since 2005, currently by an ADT funding, 2011-2012)

OSA stands for Open Simulation Architecture. This is a development project for a new discrete-event simulation platform. The original elements of this new platform are:

  1. the integration in the same tool of a large number of Modeling & Simulation concerns (modeling, development, instrumentation, …)
  2. the extensive use of Component-Based Software Engineering (CBSE) techniques, and more particularly the Fractal component model (for example, to ease the reuse and replacement of parts of the platform AND of the models; cf. this paper)
  3. the use of Aspect-Oriented Programming (AOP) techniques in order to separate concerns
  4. an open (Open Source) and modular architecture, easy to use (automatic dependency management based on a Maven repository), inspired by and based on Eclipse
  5. a collaborative development model (forge, wiki, …)

OSA v0.6 is available on the INRIA forge with a demo of Peer-to-peer storage simulation.

1 Software Engineer position available to work on this project starting Sept 2010 (1 yr, renewable). Details about how to apply will soon be published here.

2.2  Other projects

Binding Layers (Since Dec 2011)

Binding Layers is a new Component Architecture Model.

A software Component Architecture Model (CAM) describes a set of operating rules and mechanisms for building complex applications using a structured assembly of software components. Compared to a component model, e.g. J2EE, Spring, SCA or Fractal, a CAM does NOT specify the component model itself, but builds instead on top of existing Component Models (CMs). As a result, an important property sought in BL-CAM is genericity: BL-CAM is meant to be compliant with many Component Models.

Various approaches have been proposed so far to specify the structure of complex applications based on components, but the most popular are certainly the following:

  • Flat structures: all components lie in a common container and interact directly with each other according to their dependencies;
  • Hierarchical structures: components can be grouped into bigger units, which can in turn be used to form even bigger units, and so on.

Both approaches have their pros and cons: flat structures avoid the complexity of hierarchy and therefore usually offer better performance, but at the cost of reduced reusability and control; on the contrary, hierarchical structures offer great means for reusing parts of an application, and the hierarchy provides a de facto means for building complex control and fine-tuned non-functional services. However, despite their popularity, both approaches fail to provide good means for the Separation of Concerns at the architectural level.

Binding Layers is an attempt to solve this issue by following a third, different approach. Like flat structures, BL does not suffer the performance cost of a many-level hierarchy; yet, like hierarchical structures, it allows for sophisticated grouping strategies. For this purpose, BL relies extensively on two original features: component sharing and layering by extension.

Component sharing means that a single component instance can appear in many component assemblies. Therefore, assuming that component assemblies are formed according to some common concern, component sharing allows a component to be directly part of a concern, rather than having to reach for it, e.g. through a complex path in the component hierarchy. A usual idiom found in other component models is to shorten this path by placing non-functional concerns in, or beside, each component (e.g. in the membrane of Fractal components). However, this approach creates an artificial dichotomy among components, each of which ends up belonging to one of two dimensions: functional or non-functional. On the contrary, thanks to component sharing, Binding Layers supports seamlessly and uniformly an arbitrary number of dimensions (including functional and non-functional ones).

Component groups formed in each dimension are called layers. Each layer has a flat structure. However, reuse is made easy: first, since the number of layers is not limited, each layer, typically in charge of one concern, can be reused independently to build new applications (e.g. a persistence layer can be reused in many applications). In addition, Binding Layers offers an extension mechanism, somewhat similar to the inheritance mechanism found in OO languages, that allows for incremental specializations of a given layer.
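The component-sharing idea can be sketched in a few lines of C++. This is a toy illustration only: `Component`, `Layer` and `shared_between` are hypothetical names, not the actual Binding Layers API.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// A component instance, identified by its object identity (not its name).
struct Component {
    std::string name;
};

// A layer: a flat grouping of components around one concern.
// The same component instance may appear in several layers at once.
struct Layer {
    std::string concern;  // e.g. "functional", "persistence"
    std::vector<std::shared_ptr<Component>> members;
};

// Counts component instances shared between two layers.
// Comparing shared_ptr values compares object identity, so a component
// counts as shared only if both layers hold the SAME instance.
int shared_between(const Layer& a, const Layer& b) {
    int n = 0;
    for (const auto& c : a.members)
        for (const auto& d : b.members)
            if (c == d) ++n;
    return n;
}
```

The key point the sketch makes is that a component belongs to a concern by membership in its layer, rather than by reaching for the concern through a hierarchy.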

Status: work-in-progress.

See this presentation (PDF, 836 KiB) for more details.

2.3  Olivier’s SandBox

You will find on this page links to some ongoing projects, drafts, experiments.

2.4  Latest and soon coming Visitors

  • Gabriel Wainer, Carleton University, Ottawa, Canada (July 2012)
  • Joe Peters, SFU, Vancouver, Canada (June 2012)
  • Rassul Ayani, KTH, Stockholm, Sweden (February–March 2012)
  • Gabriel Wainer, Carleton University, Ottawa, Canada (June-July 2011)

2.5  Recent talks (or soon coming)

  • “Some questions about the relations between activity and time representations”, presented at the ACTIMS Workshop in Zurich, Jan 16–18 2014.
  • Binding Layers Level 0: An abstract multi-purpose component layer, Sophia Antipolis, SCADA meeting, Nov 28 2013.
    See project description above.
    (NB: This is the latest version of a talk first given at Carleton University, Ottawa, on Oct 13, 2013.)
  • “Using TM for high-performance Discrete-Event Simulation on multi-core architectures”. Presentation at the EuroTM’2013 Workshop on Transactional Memory, Prague, April 14th 2013.
    Abstract: I recently started to investigate how TM could possibly be used to optimize the performance of a discrete-event simulation (DES) engine on a multi-core architecture. A DES engine needs to process events in chronological order. For this purpose, it needs an efficient data structure, typically abstracted as a heap or priority queue. Therefore, my goal is to design an optimized heap-like data structure supporting concurrent multi-thread access patterns, such that multiple events can be processed in parallel by multiple threads. In DES, traditional parallelization techniques fall into two categories: either conservative or optimistic. In the conservative approach, events are dequeued and processed in strict chronological order, which requires a synchronization protocol between the concurrent logical processes (LPs) to ensure consistency. In the optimistic approach, LPs are free to proceed and possibly violate the chronological order, but in case such a violation happens, a roll-back mechanism is used to return to the last consistent state (which requires a snapshot). The solution I am currently investigating is based on a software emulation library for C++ called TBoost.STM. This library offers various transaction semantics, among which one, called invalidate-on-commit, allows a transaction to be invalidated by the process that "suffers" the violation rather than the one that originates it. In our case, assuming that a transaction is associated with the dequeuing and processing of an event, a transaction is deemed successful when it completes without any earlier event having been inserted in the heap and no earlier event still pending. This is where building a solution based on invalidate-on-commit and transaction composition seems promising: indeed, it seems easier to discover chronological violations when new events are inserted. In that case, all transactions that were mistakenly started too early can be invalidated.
    This library also provides a way of composing transactions, which could also prove helpful. For example, an aggressively optimistic strategy could dequeue new events before the full completion of earlier events, in which case composition could be used to make the completion of later events depend on the completion of earlier ones. I am still at an early stage of this work, for which I have just started experiments and performance evaluations.
  • Using Computer Simulations for Producing Scientific Results: Are We There Yet?
    Keynote presentation given at WNS3 2013, the 2013 Workshop on NS3, Cannes, France, March 5 2013
    Abstract: A rigorous scientific methodology has to follow a number of supposedly well-known principles. These principles date back as far as ancient Greece, where they started to be established by philosophers like Aristotle; later noticeable contributions include principles laid down by Descartes and, more recently, Karl Popper. All disciplines of modern Science do manage to comply with those principles with quite some rigor. All … except maybe when it comes to computer-based Science.
    Computer-based Science should not be confused with the Computer Science discipline (a large part of which is not computer-based); it designates the corpus of scientific results obtained, in all disciplines, by means of computers, using in-silico experiments, and in particular computer simulations. Issues and flaws in computer-based Science started to be regularly pointed out in the scientific community during the last decade.
    In this talk, after a brief historical perspective, I will review some of these major issues and flaws, such as the reproducibility of results or the reusability and traceability of scientific software material and data. Finally, I will discuss a number of ideas and techniques that are currently being investigated, or could possibly serve as part of candidate solutions to those issues and flaws.
  • On Reproducibility and Traceability of Simulation Experiments (PDF) presented at WinterSim in Berlin, Dec. 2012.
    Abstract: Reproducibility of experiments is the pillar of a rigorous scientific approach. However, simulation-based experiments often fail to meet this fundamental requirement. In this paper, we first revisit the definition of reproducibility in the context of simulation. Then, we give a comprehensive review of issues that make this highly desirable feature so difficult to obtain. Given that experimental (in-silico) science is only one of the many applications of simulation, our analysis also explores the needs and benefits of providing the simulation reproducibility property for other kinds of applications. Coming back to scientific applications, we give a few examples of solutions proposed for solving the above issues. Finally, going one step beyond reproducibility, we also discuss in our conclusion the notion of traceability and its potential use in order to improve the simulation methodology.
  • My D.E.S. is Going to Be Better Than Yours (PDF) at SFU Seminar, in Surrey Campus, Surrey, BC, May 28 2012.
    Abstract: Although provocative, this claim is often made by those who are considering the perilous project of writing their own discrete event simulator. In this talk, I will first review the pros and cons of writing a new simulator to demonstrate that there is no clear choice between writing a new simulator or reusing an existing one. Assuming that the decision to write a new simulator is eventually made, I will present a number of technical issues and some techniques that I have been using, in the last few years, to solve them. Most of these techniques involve advanced software engineering techniques and concepts, including Software Reuse, Aspect Oriented Programming, Separation of Concerns, Component Frameworks, and Architecture Description Languages.
    Then, I will introduce the Open Simulation Architecture (OSA) and its philosophy. OSA is a research project that I have been leading during the last few years, whose goal is to experiment with the various techniques described above in order to improve the simulation methodology. In response to the provocative title of this talk, I will show how OSA aims at offering a new simulator by attempting to integrate and reuse the best parts of other simulators. Finally, I will focus on the particular “layered” design used in OSA. While this concept sounds familiar, this layering is actually a unique feature that allows all of the previously mentioned concepts to fit together and serves the overall modeling and simulation methodology surprisingly well.
  • Some desired features for the DEVS ADL (PDF) at the DEVS/TMS Workshop, Boston, April 6th, 2011.
  • Invited presentation at the USS-SIMGRID workshop in Cargese (Corsica, FR), April 2010. (Some Methodology Issues and Methodology Experiments in the OSA Project - PDF slides)
  • Invited presentation at the ARS/SCS seminar at Carleton University (Ottawa, CA), August 2010 (Same slides as Cargese above).
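The invalidation-on-insertion bookkeeping described in the EuroTM abstract above can be illustrated by a toy sketch. This is a single-threaded illustration of the idea only, not an STM and not the TBoost.STM API; `Txn` and `Speculator` are hypothetical names.

```cpp
#include <cassert>
#include <vector>

// A speculative "transaction": a thread has optimistically dequeued the
// event at event_time and started processing it.
struct Txn {
    double event_time;   // timestamp of the event being processed
    bool   valid = true; // cleared when the speculation turns out wrong
};

// Tracks in-flight speculative transactions. The invalidate-on-commit
// idea: the inserting side (which "causes" the chronological violation)
// invalidates the transactions that ran ahead, rather than each runner
// having to re-validate itself.
struct Speculator {
    std::vector<Txn*> in_flight;

    void begin(Txn& t) { in_flight.push_back(&t); }

    // Called on event insertion: any transaction processing an event
    // LATER than the newly inserted one was started too early, because
    // the new event should have been processed first. Mark it invalid
    // so it rolls back instead of committing.
    void on_insert(double new_event_time) {
        for (Txn* t : in_flight)
            if (t->event_time > new_event_time)
                t->valid = false;
    }
};
```

This captures why detecting violations at insertion time is attractive: the inserter knows the new timestamp and can invalidate exactly the transactions that jumped ahead of it.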

↑ Contents

3.  Open Positions

Internship subjects available

No internship position available at the moment for working with Olivier.

Non-local students must send a resume and motivation letter to apply.
Local students may apply using the standard procedure.

3.1  Using transactional memory for implementing a multithreaded simulation engine

This subject is offered as a first-year Master project for a group of 2–4 students OR as a Master 2 subject.

Heap structures are critical to the performance of many applications. One good example is Discrete-Event Simulation (DES), in which the events that represent the history of the system are stored in such a structure in random order during the simulation, but have to be processed in strictly increasing order (of occurrence time) to follow the chronology. In order to speed up and/or scale up the execution of such DES, various algorithms have been proposed for distributing simulations over multiple computers while maintaining global synchronization, using message passing. The advent of many-core architectures opens new perspectives for the parallelization of such algorithms on multiple cores, using multiple threads and shared memory.
The goal of this internship/project is to implement and evaluate the performance of such a multi-threaded heap algorithm based on Transactional Memory. Transactional Memory is a recent technique proposed to replace the use of locks and mutual exclusion in concurrent algorithms running on a shared-memory architecture. As its name suggests, it borrows the idea of transactions from the database world: when a concurrent action is needed, a transaction is initiated in memory; if the action completes without conflict (with other threads), the transaction is committed and the new memory state is kept; if the action generates a conflict, some of the conflicting transactions have to be rolled back (i.e. the new memory state is dropped) and restarted. This technique was proposed a few years ago, but until recently it was only available by means of software emulation. Major actors such as Intel and IBM have recently started to build or announce hardware support for Transactional Memory in their latest products (e.g. the IBM BlueGene/Q supercomputer already has it [1], and the next generation of Intel processors is announced with an instruction-set extension for TM support [2]).
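As a point of reference, the simplest non-TM variant of such a structure, a mutex-protected binary heap of events, can be sketched as follows. This is a minimal illustration with hypothetical names, not the code the project would produce.

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <vector>

// A simulation event, ordered by increasing occurrence time.
struct Event {
    double time;  // occurrence time
    int    id;    // illustrative payload
};

// The coarse-grained, lock-based baseline: every operation takes one
// global mutex. Correct but serializes all threads, which is exactly
// what a TM-based design would try to avoid.
class LockedEventQueue {
public:
    void insert(const Event& e) {
        std::lock_guard<std::mutex> g(m_);
        heap_.push(e);
    }

    // Removes and returns the chronologically next event.
    // Precondition: the queue is not empty.
    Event extract_min() {
        std::lock_guard<std::mutex> g(m_);
        Event e = heap_.top();
        heap_.pop();
        return e;
    }

    bool empty() {
        std::lock_guard<std::mutex> g(m_);
        return heap_.empty();
    }

private:
    // std::priority_queue is a max-heap; invert the comparison
    // so the earliest event sits on top (a min-heap).
    struct Later {
        bool operator()(const Event& a, const Event& b) const {
            return a.time > b.time;
        }
    };
    std::mutex m_;
    std::priority_queue<Event, std::vector<Event>, Later> heap_;
};
```

Under contention, the single mutex becomes the bottleneck; the project's premise is that transactional memory could let independent heap operations proceed concurrently instead.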
Work to do
The goal is to investigate the use of such a TM software emulation library to implement a multi-threaded heap data structure. The library we chose is TBoost.STM [3,4], a proposed extension to the Boost C++ library. The work to be done during the TER is the following:
  • implement or retrieve various multi-threaded heap data structures in C++ using the following algorithms:
    • without transactional memory, as found in the literature
    • with transactional memory, custom-designed
  • run performance comparisons between the various implementations
    • Build a performance benchmark
    • Run experiments on a multi-core computer
Required skills: C/C++ programming, experience with POSIX threads and concurrent programming (locks, semaphores)
  • [1] Peter Bright. “IBM’s new transactional memory: make-or-break time for multithreaded revolution.” ARS Technica, Aug 31 2011. see here
  • [2] Peter Bright. “Transactional memory going mainstream with Intel Haswell.” ARS Technica, Feb 2012. see here
  • [3] Justin E. Gottschlich, Jeremy G. Siek, Paul J. Rogers, and Manish Vachharajani. “Toward Simplified Parallel Support in C++.” In Proceedings of the Fourth International Conference on Boost Libraries (BoostCon), May 2009. see here
  • [4] The TBoost.STM Library: see here

↑ Contents

4.  Old Stuff

↑ Contents

5.  Students

Current Students

  • Damian Vicino, 2013–2015 (co-tutelle with Carleton University; co-advisor with G. Wainer and F. Baude)

Former Students

I was happy (and lucky :-) to supervise the following PhD. students:

  • Julian Monteiro, 2007–2010 (co-advisor with S. Perennes)
Modeling and Analysis of Reliable Peer-to-Peer Storage Systems
  • Juan-Carlos Maureira, 2008–2011 (co-advisor with JC Bermond)
  • Judicael Ribault, 2008–2011
Reuse and Scalability in Modeling and Simulation Software Engineering

Recently, I also supervised the following student internships:

  • Thanh Phuong PHAM, Master 2 IFI (Ubinet), Research Internship (6mon, 2012)
  • Inza Bamba, Master 2 IFI (Ubinet), M.Sc. Research Internship (6mon, 2010)
  • Alaedin Moussa, Polytech’Marseille 2nd year, Research Initiation Internship (2mon, 2010)

↑ Contents

6.  Other Research Activities

↑ Contents

7.  Recent Bibliography (Full biblio…)

  1. Damian Vicino, Chung-Horng Lung, Gabriel Wainer and Olivier Dalle (2014) Evaluating the impact of Software-Defined Networks’ Reactive Routing on BitTorrent performance. In FNC - 9th International Conference on Future Networks and Communications. Niagara Falls, Canada. (Elhadi M. Shakshuki, Eds.) Elsevier. (URL) (BibTeX)
  2. Olivier Dalle, Damian Vicino and Gabriel Wainer (2014) A data type for discretized time representation in DEVS. In {SIMUTOOLS - 7th International Conference on Simulation Tools and Techniques}. Lisbon, Portugal, Mar. (Kalyan Perumalla, Rol and Ewald, Eds.). ICST. (URL) (PDF) (BibTeX)
  3. Damian Vicino, Gabriel Wainer and Olivier Dalle (2013) Using DEvS models to define fluid based uTP model. ACM SIGSIM PADS - Intl Workshop On Principles of Advanced and Distributed Simulation - Poster Presentation. (BibTeX)
  4. Olivier Dalle and Emilio P. Mancini (2013) NetStep: a micro-stepped distributed network simulation framework (short paper). In {SIMUTools - 6th International ICST Conference on Simulation Tools and Techniques - 2013}. Cannes, France, Mar. (Wentong Cai and Kurt Vanmechelen, Eds.). ICST. (URL) (PDF) (BibTeX)
  5. Emilio P. Mancini, Gabriel Wainer, Khaldoon Al-Zoubi and Olivier Dalle (2012) Simulation in the Cloud Using Handheld Devices. In {MSGC@CCGRID - Workshop on Modeling and Simulation on Grid and Cloud Computing - 2012}. Ottawa, Canada, May. (IEEE, Eds.) Pages 867–872, . Wainer, Gabriel and Hill, David and Taylor, Simon. (URL) (PDF) (BibTeX)
  6. Olivier Dalle and Emilio Mancini (2012) Integrated Tools for the Simulation Analysis of Peer-To-Peer Backup Systems. In Proceedings of the 2012 Intl Conference on Simulation Tools and Techniques (SIMUTOOLS 2012). Sirmione, Italy, March. (F. Quaglia and J. Himmelspach, Eds.) Pages 178–183. (PDF) (BibTeX)
  7. (2012) Themed Issue: Recent advances in parallel and distributed simulation. (Steffen Straßburger, Olivier Dalle and George F. Riley, Eds.) Palgrave MacMillan. (URL) (BibTeX)
  8. Olivier Dalle (2012) On Reproducibility and Traceability of Simulations. In Proceedings of the 2012 Winter Simulation Conference. dec. (C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose and A. M. Uhrmacher, Eds.) Page 244. (URL) (PDF) (BibTeX)
  9. Gabriel A. Wainer, Khaldoon Al-Zoubi, Olivier Dalle, David R.C. Hill, S. Mittal, J.L. Risco Mart{\’i}n, Hessam Sarjoughian, L. Touraille, Mamadou K. Traor{é} and Bernard P. Zeigler (2011) Standardizing DEVS model representation. In Discrete-Event Modeling and Simulation: Theory and Applications, G. Wainer, P. Mosterman Eds., Taylor and Francis, pages 427–458. (BibTeX)
  10. Olivier Dalle (2011) Should Simulation Products Use Software Engineering Techniques or Should They Reuse Products of Software Engineering? — Part 2. Modeling \& Simulation Magazine, 11(4).Online publication. (PDF) (BibTeX)
  11. Gabriel A. Wainer, Khaldoon Al-Zoubi, Olivier Dalle, David R.C. Hill, S. Mittal, J.L. Risco Mart{\’i}n, Hessam Sarjoughian, L. Touraille, Mamadou K. Traor{é} and Bernard P. Zeigler (2011) Standardizing DEVS Simulation Middleware. In Discrete-Event Modeling and Simulation: Theory and Applications, G. Wainer, P. Mosterman Eds., Taylor and Francis, pages 459–494. (BibTeX)
  12. Olivier Dalle and Judica{ë}l Ribault (2011) Some Desired Features for the DEVS Architecture Description Language. In Proceedings of the Symposium On Theory of Modeling and Simulation — DEVS Integrative M&S Symposium (TMS/DEVS 2011). Boston, MA, USA, April 4–9, 10p. (PDF) (BibTeX)
  13. Olivier Dalle (2011) Should Simulation Products Use Software Engineering Techniques or Should They Reuse Products of Software Engineering? — Part 1. Modeling \& Simulation Magazine, 11(3).Online publication. (PDF) (BibTeX)
  14. Emilio Mancini and Olivier Dalle (2011) Traces generation to simulate large-scale distributed applications. In Proceedings of the 2011 Winter Simulation Conference (WSC’11). Phoenix, AZ, December. (S. Jain, R. R. Creasey, J. Himmelspach, K. P. White and M. Fu, Eds.) Pages 2993 −3001. (BibTeX)