Saturday, 29 September 2012

Iris van Herpen: how to make clothes with rapid prototyping

Recently we talked about prototyping applied to the museum context. For those who are not accustomed to such matters, this topic can seem quite difficult to grasp.

In particular, it may be difficult to imagine a 3D printer being used to build dioramas. The idea of the diorama has, in fact, remained rather frozen in the displays housed behind the glass of natural history museums of the nineteenth century and much of the twentieth.

These are hand-crafted dioramas, often made with materials taken from the natural environments of the animals whose habitat they are meant to represent. Today, dioramas are designed differently, and some successful examples even meet the most modern artistic standards.

To help readers understand, I will give an example of prototyping completely divorced from the museum context: the use of prototyping in fashion, illustrated through the work of Iris van Herpen.

Iris van Herpen, born in the Netherlands in 1984, became known for her truly special clothes: two of her loyal customers are Lady Gaga and Björk.

After sketching a dress, Van Herpen drapes it on a virtual model and then entrusts the result to Materialise, a Belgian company that produces 3D prints. As she says, during this process her tailoring workshop is transformed into a laboratory where creation, emotion and technology fuse together into a very particular aesthetic vision.

What impressed the fashion world was the perfect combination of craftsmanship and new technologies. The end result is a dress that surprises with its consistency, somewhere between the organic and the synthetic. A synthetic quality that seems to come from the future.

It is no coincidence that TIME magazine included her clothes among the 50 best inventions of 2011.

Her models are composed of different materials, ranging from rubber to plastic. The designer's comment about her "first time" with prototyping is interesting:
The first time I used 3D printing, it completely changed my thinking. It freed me from all physical limitations. Suddenly, every complex structure was possible and I could create more detail than I ever could by hand.
Far from depleting her creativity, prototyping expands it, opening new horizons. If this was possible in the field of fashion, why couldn't it be done in the field of cultural heritage?

Friday, 21 September 2012

Extreme SfM: fast data acquisition and particular light conditions

Hi all,
this post reports some technical information regarding Cicero's article about Converting pictures into a 3D mesh with PPT, MeshLab and Blender.

The experiment of digitally documenting in 3D a statue of the Egyptian Museum in Torino (IT) was meant as a hard test of the potential of SfM techniques in archaeology.
The idea came while I was visiting the exhibition with my wife Kathi during a holiday: I asked the guardians if it was possible to take photos, and they answered that there was no problem as long as I did not use the flash.
As you can see from the picture below, I found a perfect situation when I reached the first statue room; the atmosphere was charming, with the sculptures in darkness and only a spotlight to make them stand out.

Spot light condition

This particular light condition was a good test for SfM techniques because I could not modify it "artificially" (e.g. with other lamps to better illuminate my subject). Moreover, I could not use the flash, so I had to turn the ISO of my Nikon D70 up to the maximum value, in order to take pictures without the help of a tripod.
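The trade-off behind that choice can be sketched with a little arithmetic: at a fixed aperture, the shutter time needed for the same exposure scales inversely with the ISO value. The numbers below are purely illustrative, not metered values from the museum.

```python
# Illustrative exposure sketch: at a fixed aperture, the shutter time
# required for the same exposure scales inversely with ISO sensitivity.
def handheld_shutter(base_shutter_s, base_iso, new_iso):
    """Shutter time (seconds) needed at new_iso to keep the same exposure."""
    return base_shutter_s * (base_iso / new_iso)

# A scene metered at 1/4 s at ISO 200 needs only 1/32 s at ISO 1600
# (the Nikon D70's maximum): short enough to shoot handheld.
print(handheld_shutter(0.25, 200, 1600))
```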
Another difficulty arose from the need to acquire the data quickly, without disturbing the other tourists' visit.

The crowded room
Anyway, having increased the camera's ISO, it was not a problem to collect all the data in just a couple of minutes.

Once home, I tried to build a 3D digital model with SfM and IBM techniques, using Pierre Moulon's PPT. Since I did not expect to succeed, I made just a fast 3D model with low-quality parameters (scaling all the pictures to a medium resolution). Contrary to what I thought, the model was accurate enough, and the experimentation went on thanks to the collaboration of Cicero Moraes, who was able to recreate a high-quality texture using the methodology he described in his post.
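The "low-quality parameters" idea can be sketched in a few lines: before feeding the photos to an SfM pipeline, each image is scaled down so its longest side stays under a cap, cutting processing time. The 1200 px cap below is an illustrative value, not the one actually used in the experiment.

```python
# Sketch of pre-SfM downscaling: cap the longest image side to reduce
# reconstruction time (the 1200 px limit is only an example value).
def scaled_size(width, height, max_side=1200):
    """Return (w, h) scaled so the longest side is at most max_side."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    factor = max_side / longest
    return round(width * factor), round(height * factor)

# A Nikon D70 frame (3008 x 2000 px) would become roughly 1200 x 798 px.
print(scaled_size(3008, 2000))
```

The actual resizing can then be done with any batch tool (e.g. ImageMagick or Pillow) before running the reconstruction.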

The low-quality 3D model in ParaView


This article was possible thanks to the kindness of Dott.ssa Paola Matossi L'Orsa and Dott.ssa Sara Caramello and with the permission from the "Fondazione Museo delle Antichità Egizie di Torino".

Thursday, 20 September 2012

Good ideas, new technologies and museum experiences

The digitization of finds, as we saw in a previous post, has both advantages and disadvantages from different points of view. Leaving aside problems related to the economic sphere of those who may benefit from them (in the broadest sense), it is clear that digitization technology has opened their use to a wider audience.

An audience that otherwise, whether by choice or because of particular socio-cultural conditions, would not come into contact with those same finds and, in turn, with the sciences that study them.

Museum exhibitions and educational exhibits are among the main beneficiaries of this new approach. It is a fact that an exhibition like Homo sapiens, recently held in Rome, would not have had the same impact on visitors had it not made use of significant multimedia and interactive tools.

The point is not just to amaze visitors, but to help them obtain, from their interaction with the exhibition, a real experience of the visit. Without this experience, visitors will hardly reflect deeply on what they have seen, nor derive any advantage useful to their daily life. An advantage that may take the form of an idea to be applied in their own field of work, rather than the sheer relief given by the experience of beauty or interest.

The "Homo sapiens" exhibition can be a misleading example if we think, in general terms, of the usefulness of these technologies in the museum context: misleading because of its large investments and uncommon spaces.

Collaborating with local museums, I often deal with problems of displaying finds. The scarcity of economic resources that small museums often face, and that is basically inherent to the very idea of a small museum (economic crisis or not), does not allow "easy" solutions, which often coincide with high costs.

I recently reflected upon the relationship between technology and exhibition costs, while considering the use of dioramas for a particular museum. The construction of models, when possible, is entrusted to specialized firms, sometimes to skilled artisans, more often to willing people who deal with it as a hobby and have a certain skill.

In this process, technology is almost absent: digitization and prototyping of models do not yet seem to be basic tools for simplifying the work of assembly. Yet their potential is evident and the costs are relatively affordable, although it must be acknowledged that there is an objective problem in disseminating this knowledge, which still seems cabalistic material for the initiated few.

Building a diorama with rapid prototyping allows a considerable saving of time and money, in addition to providing the basic material for a museum's merchandising. If there is indeed a product segment that captures the visitor's imagination, it is the object able to represent the visit just completed and to remind him of the physical place where it took place (I leave aside the anthropology of souvenirs, which nevertheless plays an important part in this (re)cognition).

It is true that this happens only if the visit becomes an experience and, therefore, as mentioned above, the visitor feels he has acquired something applicable in other contexts. It is thus necessary, beyond the technological tools used, not to forget that the visitor is at the center of everything, and that the find in itself is only a means to educate him. "Education", here, means not only the assimilation of information, but the opening of a new sensibility towards the topics covered.

So, without a good idea, technology amazes without creating a surprising experience; but without technology, a good idea turns into a sterile list of information on a wall panel.

The link between idea and technology lies in the experimental use (again, experience is the main part of learning and application) of the available tools, in a feedback process that goes from idea to technology and vice versa. It is what our hominid ancestors experienced with the feedback circuit formed by hand and brain, which created language and enabled both biological and cultural evolution.

And exhibits, in a certain way, are nothing more than a continuation of this process.

Sunday, 16 September 2012

Converting pictures into a 3D mesh with PPT, MeshLab and Blender

Note: please also read the article with technical information, important and complementary, about the technique, place and manner in which the photographs were taken.

SfM is a powerful technology that allows us to convert a picture sequence into a point cloud.

MeshLab is a useful 3D scanning tool, under constant development, that can be used to reconstruct a point cloud into a 3D mesh.

Blender is the most popular open source modeling and animation software, with an intuitive UV mapping process.

Joined together, the three programs provide a complete picture-scanning solution.

The process will be described briefly, for those who already have some knowledge of the tools used for this reconstruction.

First of all, a group of pictures was needed; it was converted into a point cloud with the Python Photogrammetry Toolbox.

The pictures were taken without flash. This made the process harder later on, when the images had to be used as references to create the relief of the surface.

MeshLab was used to convert the point cloud into a 3D mesh with Poisson reconstruction.

The surface was painted with vertex colors.

The 3D mesh and the point cloud were imported into Blender.

The point cloud was imported because it contains the information about the camera positions (orange points).

Using these points, it was possible to place the camera in the right position.

The vanishing points were matched using the focal length of the camera. But, as we can see in the image above, the mesh did not match the reference picture.
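The link between a photo's focal length and the virtual camera can be made explicit with a standard bit of optics: the horizontal angle of view follows from the focal length and the sensor width. The sketch below is a hypothetical helper (not part of the original workflow); the 23.7 mm sensor width is that of the Nikon D70 used for the shots, while the 18 mm focal length is only an example value.

```python
import math

# Horizontal angle of view implied by a real camera's focal length, useful
# when matching Blender's virtual camera to a photograph. Sensor width
# defaults to the Nikon D70's 23.7 mm; the focal length is an example.
def horizontal_fov(focal_mm, sensor_width_mm=23.7):
    """Horizontal angle of view, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(round(horizontal_fov(18.0), 1))
```

In Blender the same matching can be done directly by setting the camera's focal length and sensor size to the values recorded in the photo's EXIF data.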

To aim the camera, it was necessary to orbit it manually.

Blender has a good set of UV mapping tools. It is possible to use only the region of interest of the picture to make the final texture map, as we can see in the infographic above.

So, in this process, the texture for each viewpoint was projected from one picture. Above we can see the original image on the right and, on the left, the mesh with the projected texture. The projection appears to be perfect because the camera's viewpoint is the same as the picture's.

But if the 3D scene is orbited, we can see that the projection works well only from that last viewpoint.

So, a good way to build the final texture is to use the viewpoint of each picture to paint only the area of interest.

When the scene is orbited, we can see that only the area of interest was painted.

The surface has to be painted from several viewpoints, to complete the entire texture bit by bit.

We can see the finished process above. It is not necessary to use all the pictures taken to build the final texture. Depending on the complexity of the model, only four images may be needed to complete the entire texture.

Now we can compare the texture process and the vertex paint process. In this case the texture process was the more interesting one to use.

The resulting mesh has a high level of detail and can nevertheless be viewed in realtime (see the video at the top).

To increase the mesh quality, we can use the Displace modifier in Blender, which projects relief onto the surface using the texture as a reference.
The final result:

This article was possible thanks to the kindness of Dott.ssa Paola Matossi L'Orsa and Dott.ssa Sara Caramello and with the permission from the "Fondazione Museo delle Antichità Egizie di Torino".

Wednesday, 12 September 2012

Young anthropologists meeting in Florence

Hi all,
just a fast post to announce that the first Italian meeting of the "Young anthropologists" will start tomorrow in Florence (September 13-14, 2012). The event is under the patronage of the AAI (Associazione Antropologica Italiana) and of the ISItA (Istituto Italiano di Antropologia); it will take place in the Anthropology Laboratories of the Department of Evolutionary Biology "Leo Pardi" (Florence University). Here is the official program of the conference. We (Arc-Team) will participate with a contribution by Cicero Moraes, Giuseppe Naponiello and Silvia Rezza ("An experimental methodology of craniofacial digital reconstruction with FLOSS") and, during the final discussion about "Open Source and Open Data in Italian anthropology and archaeology", with a presentation by Alessandro Bezzi and Luca Bezzi ("Anthropology and Open Source, the experience of Arc-Team").

The official logo

Friday, 7 September 2012

Building an Xcopter

Hi all,
last week I tried to rebuild our xcopter. The model I had definitively destroyed was assembled with the help of an expert in model aircraft (Walter Gilli). The mainboard is a kkMultiCopter controller, based on Rolf R Bakke's original PCB (public domain). The other parts are:
  • 1 power distribution board,
  • 1 lipo battery,
  • 1 low voltage alarm,
  • 4 brushless outrunner motors,
  • 4 ESCs (speed controllers),
  • 2 counter-rotating propellers,
  • 2 standard propellers,
  • some pieces of silicone wire, connectors and leads,
  • a homemade frame composed of 4 aluminum arms.
I put the first prototype on the "operating table" (see picture below) and started to remove individual parts to reassemble them into the new xcopter.

The first step was to create the electrical network using the power distribution board (picture below), which transmits electricity from the lipo battery to the motors. A switch makes turning the xcopter on and off easier.

The second step was to create a plate on which to fix the mainboard and the receiver of the remote control. I modified an empty CD/DVD box (picture below).

Then I started to remove the ESCs and the motors from the first prototype and to solder them into the new model (picture below).

I was careful to respect the xcopter schema: type of propellers and rotation direction of the motors (picture below).

Finally I fixed the mainboard and the receiver of the remote control on the CD/DVD box (picture below).

The picture below shows the "operating table" after the "transplant" procedure.

I closed the top with the CD/DVD box cover (picture below) and was ready for the first flight. The remote control was already correctly set up from the first prototype; I only needed to adjust the Roll and Pitch pots on the mainboard a little. Have fun!

Tuesday, 4 September 2012

SfM/IBM of old data

Hi all,
I was organizing the data of an old storage medium and found some pictures of a work we did at the Aramus excavation during the 2006 season. The documentation of a walled-up door was a hard test for 2D digital documentation ("metodo Aramus"). The picture below shows the logistical difficulty of taking pictures usable for a photomosaic: due to the morphology of the site, it was not possible to stand in front of the wall.

In the end we took 14 photos to document an area which, under normal conditions, could have been covered by a single image. The schema below shows the different areas covered by the 14 photos: bigger in the upper stripe and, obviously, smaller in the lowest.

A selection of the 14 photos is shown in the image below.

In the field we also took a group of 14 images from different points of view, intending to process the photo set with the software Stereo. In the end we did not process it, because the 2D photomosaic reached good quality and sufficient accuracy. Stereo's data elaboration is time-consuming and depends entirely on human work. The picture below shows six photos taken for 3D documentation.

After six years I found this data again and tried to process it with the Python Photogrammetry Toolbox, which is not time-consuming because the software leads the process automatically. The result is an accurate 3D model. It is surprising that pictures taken two years before the development of Bundler could be used to create precise documentation of a no longer accessible archaeological context. The movie below shows the mesh of the walled-up door.

Thanks to Sandra Heinsch and Walter Kuntner (University of Innsbruck - Institut für Alte Geschichte und Altorientalistik) for sharing the data.

Sunday, 2 September 2012

Converting a Video of Computed Tomography into a 3d Mesh

Note: please watch this video before reading the article.

CT scanning is an excellent technology for research in many areas. Unfortunately, it is an expensive service to contract.

If you are a researcher in Egyptian archaeology or facial reconstruction, this article will show you an easy way to obtain CT scan data.

An archaeological example of the use of the technique

Describing the Technique

The technique consists in downloading a video from YouTube, Vimeo or any other movie site on the internet.

An example of a Firefox add-on that can be used to download a video is DownloadHelper, and you can download it here:

If you use another browser, there is possibly a version of DownloadHelper for it, or you can use another solution.

For this example, a CT scan video was downloaded from the Virtual Pig Head site.

The .MOV video could be downloaded directly from that page, so DownloadHelper was not needed in this case.

Note: if you like dinosaurs or articles about CT scanning, you cannot miss Witmer's Lab site. You may well find good material there for your research or pleasure.

Once the video was acquired, it turned out to have a lot of labels on the screen.

To erase them, the video editor Kdenlive was used. The solution was to create some black areas over the bigger labels.

So, a new video was generated without those labels. To convert this video into an image sequence you can use FFmpeg, a command line tool that converts video into a series of different formats:

$ ffmpeg -i Video.mpeg -sameq sequence/%04d.jpg


-i Video.mpeg is the input file.

-sameq preserves the quality of the frames in the JPG output files.

sequence/%04d.jpg: sequence is the directory where the files will be created, and %04d.jpg means that the file names will form a zero-padded four-digit sequence, like 0001.jpg, 0002.jpg, 0003.jpg.

Note: the $ sign only means that the command has to be typed in a terminal.

OK, now you have the JPG sequence, but InVesalius (the CT scan software) uses DICOM files to convert images into 3D meshes.

A DICOM file is not just an image file, but an image file with data about the patient, the distance between slices, and so on.

So, to convert the images into DICOM files you will need a specific application called IMG2DCM, which can be downloaded here. With this command line application you can convert image files like TIF, PNG and JPG into a sequence of .dcm (DICOM) files and, if necessary, set up the information about the patient, the distance between slices, and so on.

The conversion is quite easy (assuming here that the script is named img2dcm.py):

$ python img2dcm.py -i sequence_directory -o output_directory -t jpg

To open the DICOM files and convert them into a 3D mesh you can use InVesalius, a powerful open source program in the CT scan area.

As the screenshot shows, when the reconstruction is made, the muscle-name labels that were in the video get reconstructed too. This is not a problem, because they will be deleted later in the 3D editor.

We can then import the .STL file exported from InVesalius into Blender.

The .STL file comes out big and with a lot of subdivisions. You need to simplify it, with the Remesh modifier for example, in order to edit the mesh comfortably.

Blender has a sculpt mode, where you can polish the little warts created where the muscle-name texts were.

Because of the labels, the right ear came out incomplete.

You can solve this by mirroring the model and completing the missing area.

This video shows a good technique for doing this.

When the JPG sequence was converted into a DICOM sequence, the slice distance was not set up. Because of this, the pig's face was generated stretched. After polishing the warts and mirroring the ear, we can rescale the face to correct the proportions (with a clean mesh).
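The correction amounts to a simple ratio: the reconstructed volume is stretched (or squashed) along the slice axis by the factor between the spacing assumed at import time and the real slice spacing. The numbers below are illustrative, not the actual values of the pig dataset.

```python
# Hedged sketch of the proportion fix: the mesh must be rescaled along the
# slice axis (Z) by the ratio of real to assumed slice spacing.
def z_correction(real_spacing_mm, assumed_spacing_mm):
    """Scale factor to apply along Z so slices regain their real distance."""
    return real_spacing_mm / assumed_spacing_mm

# e.g. slices really 0.5 mm apart, but imported with an assumed 1.0 mm
# spacing: scale the mesh by 0.5 along Z (in Blender: S, Z, 0.5).
print(z_correction(0.5, 1.0))
```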

Usually artists remodel a complex mesh with fewer subdivisions, using a technique called retopology.

But this article is geared toward scientific and archaeological solutions, so the texturing of the model will be configured on the complex mesh, without retopology.

The final step is rendering the images and making the animation, as you saw at the start of the article.

If you want, you can download the textured .OBJ file here.

Facial forensic reconstruction from a skull reconstructed from a video
The video used to reconstruct the mummy at the start of the article (and above) can be watched here.


1) Something that made the writer of this article very proud was Mr. Witmer's mention of it on his Facebook page:

This is a good demonstration that people who like sharing information generate a great number of solutions. Most importantly, the technique was described, so everyone has a chance to learn it and build an even better solution.

2) The original post that motivated this article was written in Portuguese:
This work is licensed under a Creative Commons Attribution 4.0 International License.