Tuesday, September 30, 2008

Yu: A Domain-Independent System for Sketch Recognition

Comments

Summary

In this paper the author describes a domain-independent system for sketch recognition. He discusses stroke approximation through direction graphs, curvature graphs, and the feature area of strokes.

Yu uses a different approach to vertex detection: he first tries to classify a whole stroke as a primitive. If the stroke cannot be classified as a primitive, he breaks it at the point of highest curvature and recursively tries to classify the sub-strokes.

For line segment approximation he fits the direction graph of the stroke to a horizontal line, and the stroke itself to the straight line between its endpoints. For circles the direction graph should be constantly increasing, so he fits the direction graph with a sloped straight line.
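The line/circle tests above can be sketched in code: fit the stroke's (unwrapped) direction graph with a least-squares line, then call the stroke a line if the slope is near zero and a circle or arc if the fit is good with a nonzero slope. This is a minimal sketch of the idea, not Yu's actual implementation; the tolerance values are assumptions.

```python
import math

def direction_graph(points):
    """Direction (angle) of each chord between consecutive points."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def fit_line_residual(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b, mean squared error)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx if sxx else 0.0
    b = my - a * mx
    mse = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / n
    return a, b, mse

def classify_primitive(points, tol=0.05):
    """Line if the direction graph is nearly horizontal; circle/arc if it is
    nearly a straight line with nonzero slope (tol is an assumed threshold)."""
    dirs = direction_graph(points)
    for i in range(1, len(dirs)):          # unwrap angles so the graph is continuous
        while dirs[i] - dirs[i - 1] > math.pi:
            dirs[i] -= 2 * math.pi
        while dirs[i] - dirs[i - 1] < -math.pi:
            dirs[i] += 2 * math.pi
    a, b, mse = fit_line_residual(list(range(len(dirs))), dirs)
    if mse > tol:
        return "unknown"
    return "line" if abs(a) < 1e-3 else "circle/arc"
```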

For self-intersecting strokes such as a helix, breaking only at the point of highest curvature is not a good methodology, so the author uses a different one. He breaks the stroke both at the point of highest curvature and at the point of intersection, then tries to classify each set of sub-strokes. The results from both splits are analyzed to decide which one to choose. Here the author follows the strategy of 'simpler is better': a sub-stroke classified as a circle is preferred over sub-strokes classified as a set of lines.

The author then applies some post-processing to clean up the stroke for beautification, and performs basic object recognition of shapes such as squares, circles, and rectangles.

The author claims to have achieved an accuracy rate of 98% for polylines and 94% for arcs.

Discussion

The paper presents good ideas for domain-independent object recognition. However, the author explains only vaguely how his algorithm is used for basic object recognition.

Although it introduces some new techniques for stroke approximation, I would still prefer PaleoSketch over it, because PaleoSketch explains each step in detail and does very similar work.

GLADDER: Combining Gesture and Geometric Sketch Recognition


Summary

This paper proposes a recognition system that tries to combine the advantages of two types of sketch recognition systems: 1) gesture-based systems and 2) geometric-based systems.

Gesture-based recognition depends upon how the user is supposed to draw the sketches and has a good accuracy rate; geometric-based systems allow the user to draw more naturally, but it is difficult to describe shapes using their geometric sub-parts.

GLADDER tries to merge both recognition systems to produce a higher accuracy rate. In its implementation it modifies the Rubine algorithm to use a quadratic classifier instead of a linear classifier for gesture recognition, and it uses the LADDER system for geometric recognition. A recognition assistant must decide which recognition system to use; it uses the Mahalanobis distance to accept or reject the input for a recognition algorithm. For Rubine the Mahalanobis rejection threshold is 24, and for LADDER primitives it is 100. A mid value of 30 is set to decide which system to use.

If the value is below 30 the Rubine algorithm is used for recognition; if it is above 30 the LADDER system is used.
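A rough sketch of this routing rule, assuming a single feature distribution per class (the function and return names are illustrative; only the 24/30/100 cutoffs come from the paper):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of feature vector x from a class distribution."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def route_stroke(features, class_mean, class_cov, mid=30.0,
                 rubine_cutoff=24.0, ladder_cutoff=100.0):
    """Decide which recognizer handles the stroke (a sketch, not GLADDER's code)."""
    dist = mahalanobis(features, class_mean, class_cov)
    if dist < mid:
        return "rubine" if dist <= rubine_cutoff else "reject"
    return "ladder" if dist <= ladder_cutoff else "reject"
```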

Over all inputs, the modified Rubine classifier has an accuracy of 61.9% and LADDER has 75.2%; after merging, GLADDER has the highest accuracy at 79.9%.

Discussion

This system shows how two systems can be merged, by utilizing the best of both, to produce a system that is better than either.

Kim: A curvature estimation for pen input segmentation in sketch-based modeling


Summary

In this paper the author discusses techniques for segmenting input strokes through curvature estimation. The features discussed in this paper are the direction at a point, the support for curvature estimation at point j, and the local convexity at point j with respect to point i.

The direction at a point, for three consecutive points A, B, and C, is based on the change in angle between the line segments AB and BC. Curvature estimation at point j is based on the angle that a segment makes with the horizontal.
A polygon is locally convex at point j with respect to point i if the curvature estimates at points j and i have the same sign.

For segmentation the author takes the local maxima of positive curvature and the local minima of negative curvature; these points are then taken as the segmentation points.
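The convexity and segmentation ideas can be sketched as follows, using the signed turn angle at each interior point as a simple stand-in for Kim's supported curvature estimate (the threshold value is an assumption):

```python
import math

def signed_curvature(points):
    """Signed turn angle at each interior point; the cross-product sign
    distinguishes left (positive) from right (negative) turns."""
    curv = []
    for (ax, ay), (bx, by), (cx, cy) in zip(points, points[1:], points[2:]):
        v1 = (bx - ax, by - ay)
        v2 = (cx - bx, cy - by)
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        curv.append(math.atan2(cross, dot))
    return curv

def locally_convex(curv, i, j):
    """Kim-style local-convexity test: same curvature sign at points i and j."""
    return curv[i] * curv[j] > 0

def segment_points(points, thresh=0.3):
    """Indices of local maxima of positive curvature and local minima of
    negative curvature; these become the segmentation points."""
    curv = signed_curvature(points)
    corners = []
    for i in range(1, len(curv) - 1):
        if curv[i] > thresh and curv[i] >= curv[i - 1] and curv[i] >= curv[i + 1]:
            corners.append(i + 1)          # +1 maps curvature index -> point index
        elif curv[i] < -thresh and curv[i] <= curv[i - 1] and curv[i] <= curv[i + 1]:
            corners.append(i + 1)
    return corners
```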

The algorithm proposed by Kim produced an accuracy rate of 95% on the PowerPoint basic shapes and on some basic shapes used by other researchers for curvature finding.

Discussion

This paper introduces new features for curvature estimation on input strokes. Similar features are used by other curvature estimation algorithms; this paper just gives a different approach to the same problem.

MergeCF: Eliminating False Positives During Corner Finding by Merging Similar Segments


Summary

This paper discusses a corner finding algorithm based on curvature and speed differences within a stroke. Once the corners are found, the algorithm tries to merge the smaller stroke segments with longer segments; if the fit error for the merged segment is below a threshold, the corner between the two segments is removed.

A merge is kept when the merged segment produces a low primitive fit error; no merging is done if the error of the merged segment is much higher than the sum of the original errors of the two segments.
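A minimal sketch of the merge test, with an assumed error-growth threshold and a caller-supplied primitive fit-error function (neither is MergeCF's exact formulation):

```python
def should_merge(err_a, err_b, err_merged, growth=2.0):
    """Merge two neighboring segments only if the merged segment's primitive
    fit error does not blow up relative to the sum of the original errors.
    The growth factor is an assumed threshold, not the paper's value."""
    return err_merged <= growth * (err_a + err_b)

def merge_pass(segments, fit_error, growth=2.0):
    """One greedy pass: try to absorb each segment into its right neighbor.
    `segments` is a list of point lists that share their corner points;
    `fit_error` scores how well a candidate segment fits a primitive."""
    segments = list(segments)
    i = 0
    while i < len(segments) - 1:
        a, b = segments[i], segments[i + 1]
        merged = a + b[1:]                 # the shared corner point appears once
        if should_merge(fit_error(a), fit_error(b), fit_error(merged), growth):
            segments[i:i + 2] = [merged]   # corner between a and b is removed
        else:
            i += 1
    return segments
```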

MergeCF has a high accuracy rate when compared to the corner finders of Sezgin and Kim.

Discussion

This algorithm is an extension of the corner finding algorithms of Sezgin and ShortStraw, which rely on curvature and speed differences; it adds handling for arcs and uses a top-down approach to eliminate false positives.

Thursday, September 18, 2008

PaleoSketch: Accurate Primitive Sketch Recognition and Beautification


Summary

In this paper the author discusses techniques that aid in sketch recognition and beautification without hampering the user's ability to draw freely and naturally. No constraints are placed on the user's drawing in order to help the recognition process. The author tries to recognize a primitive set of strokes:
  1. Line
  2. Polyline
  3. Circle
  4. Ellipse
  5. Arc
  6. Curve
  7. Spiral
  8. Helix
The system first passes the stroke into a pre-recognition routine, in which a series of graphs and values are computed: the speed graph, direction graph, and curvature graph. Then the corners of the stroke are calculated. In addition to these graphs, some other features are computed: the normalized distance between direction extremes (NDDE) and the direction change ratio (DCR). Polylines will have lower NDDE values and higher DCR values, and vice versa for curves.
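The two novel features can be sketched roughly as follows; this is my own reading of the definitions, not PaleoSketch's exact code:

```python
import math

def _dirs(points):
    """Chord direction at each segment of the stroke."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def _seg_lens(points):
    return [math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def ndde(points):
    """Normalized distance between direction extremes: stroke length between
    the segments of maximum and minimum direction, over total stroke length."""
    d, seg = _dirs(points), _seg_lens(points)
    i, j = d.index(max(d)), d.index(min(d))
    lo, hi = min(i, j), max(i, j)
    total = sum(seg)
    return sum(seg[lo:hi + 1]) / total if total else 0.0

def dcr(points):
    """Direction change ratio: maximum direction change over mean change."""
    d = _dirs(points)
    changes = [abs(b - a) for a, b in zip(d, d[1:])]
    mean = sum(changes) / len(changes)
    return max(changes) / mean if mean else 0.0
```

An arc sweeps its direction extremes at its ends (high NDDE) and turns evenly (DCR near 1), while a polyline concentrates all its turning at corners (high DCR).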
Then a series of tests is performed for each shape, and the author explains in detail in his paper the conditions that must be satisfied for recognition. One thing to note is that the author successfully recognizes shapes that most recognizers, such as Sezgin's, do not: arcs, spirals, and helixes.
If all the shape tests fail, the stroke is termed a complex fit. The author defines a novel hierarchy that helps distinguish between a complex interpretation and a polyline interpretation. Each primitive shape has a defined weight, based on the number of corners of the primitive. The cumulative weights are calculated for both interpretations, and the interpretation with the lowest weight is taken as the interpretation of the stroke (complex wins ties).
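The ranking step might look like this; the weight table here is illustrative, not the paper's actual values:

```python
# Assumed weights, roughly tracking primitive complexity (illustrative only).
WEIGHTS = {"line": 1, "arc": 2, "circle": 3, "ellipse": 3,
           "curve": 4, "spiral": 5, "helix": 5, "polyline_segment": 1}

def rank_interpretations(complex_fit, polyline_fit):
    """Pick between a complex interpretation (a list of primitive names) and a
    polyline interpretation (a list of line segments): the lowest cumulative
    weight wins, and complex wins ties, as the paper describes."""
    complex_score = sum(WEIGHTS[p] for p in complex_fit)
    poly_score = len(polyline_fit) * WEIGHTS["polyline_segment"]
    if complex_score <= poly_score:
        return ("complex", complex_fit)
    return ("polyline", polyline_fit)
```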

Results:
The author analyzed a dataset of 900 shapes with three versions of his own recognizer and Sezgin's recognizer: Paleo (the proposed recognizer), Paleo-F (Paleo without the NDDE and DCR features), Paleo-R (Paleo without the ranking algorithm), and SSD (Sezgin's algorithm). The results with Paleo were very good, achieving an accuracy of 99.89% for correct interpretation and 98.56% for top interpretation.

Discussion

The techniques discussed in this paper amount to a very in-depth analysis of the shapes, which accounts for the brilliant accuracy achieved by this recognizer. The paper does a great job of extending Sezgin's work and very effectively utilizes his techniques while introducing its own novel features.
I also particularly liked the ability of this low-level recognizer to be integrated into the high-level recognition system LADDER.

Sketch Based Interfaces: Early Processing for Sketch Understanding


Daniel's blog

Summary
This paper describes an algorithm that, according to the author, is a directed study toward making pen input devices more usable, in the sense that the end user can interact with the system as he or she would on paper. The paper tries to define methods that make user interaction more intuitive while retaining the power of computing.
The author's approach to early processing of a sketch is based on three phases: approximation, beautification, and basic recognition.

Stroke Approximation:
The goal is to approximate the stroke with a more compact and abstract description, while both minimizing error and avoiding overfitting. The first step in stroke approximation is vertex detection, i.e., finding the corners of a stroke. First a direction graph of the stroke is generated; from the direction graph, a curvature graph can be derived. The peaks in the curvature graph can be identified as the vertices, or corners, of the stroke. A limitation of the curvature graph is that it cannot properly identify corners whose curvature value is so small that it falls below the mean. To identify such corners the author also presents the idea of a speed graph. The speed graph approach works on the assumption that the user tends to slow down when drawing a corner. Using the speed graph alone has its own limitations: polylines formed from a short and a long segment can be problematic, and two corners can be merged into one.
The author next presents the idea of a hybrid fit, which uses both of the above techniques to identify the best set of vertices.
The next problem in stroke approximation is handling curves; the techniques above are good for polygons. The author approximates the curved regions with Bézier curves, which are defined by two endpoints and two control points.
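A toy version of the hybrid idea, combining curvature peaks with speed minima; the 0.2 rad curvature floor and the half-of-mean speed threshold are assumptions, not Sezgin's actual values:

```python
import math

def corner_candidates(points, times, k=1.0):
    """Hybrid-fit sketch: candidate corners are curvature peaks above the mean
    plus points where pen speed drops well below the mean speed."""
    # Curvature ~ absolute turn angle at each interior point.
    curv = [0.0]
    for (ax, ay), (bx, by), (cx, cy) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(by - ay, bx - ax)
        a2 = math.atan2(cy - by, cx - bx)
        curv.append(abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1))))
    curv.append(0.0)
    # Speed ~ distance over time between consecutive samples.
    speed = [0.0]
    for p, q, t1, t2 in zip(points, points[1:], times, times[1:]):
        speed.append(math.hypot(q[0] - p[0], q[1] - p[1]) / max(t2 - t1, 1e-9))
    mean_c = sum(curv) / len(curv)
    mean_s = sum(speed) / len(speed)
    from_curvature = {i for i, c in enumerate(curv) if c > k * mean_c and c > 0.2}
    from_speed = {i for i, s in enumerate(speed) if 0 < s < 0.5 * mean_s}
    return sorted(from_curvature | from_speed)
```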

Beautification:
It refers to adjusting the sketch output to make it look as the user intended. Here the author adjusts the slopes of line segments to ensure that lines apparently meant to have the same slope end up parallel.
Basic Object Recognition:
In this final step the author tries to recognize the basic objects that were built from line segments and curve segments. These simple geometric objects include ovals, circles, rectangles and squares.

Evaluation:
Overall, the users of the system were very happy when drawing sketches on it; they could interact with it very naturally. The system's detection of vertices and approximation of shapes with lines and curves was correct 96% of the time.

Discussion
The paper presents some very good ideas in corner detection, particularly the hybrid approach. What I think the paper lacks is a proper description of the beautification and basic object recognition ideas. After reading the paper, it seems quite difficult to implement these from the explanation alone.

Algorithms for the Reduction of the Number of Points Required to Represent a Digitized Line or Its Caricature


Summary


This paper proposes an algorithm for polyline simplification. This can be useful in situations where a graphical application must draw many polylines and time becomes an issue, as in cartographic applications. Although this is a line simplification algorithm, it can also be useful for finding the corners in a sketch.

The algorithm works by first taking the two end points of the sketch, i.e., its starting and ending points. It then finds the point that is most distant, in terms of perpendicular distance, from the line between those two points. If that distance is above a threshold value, the algorithm assumes this is not a single line and there are possible corners on it. The farthest point becomes a corner and splits the stroke: it serves as the new ending point for the first half and the new starting point for the second half.

The process is repeated recursively until no point can be found above the threshold. The algorithm works well and finds corners with good accuracy on polygons. When working with curves, however, it sometimes produces two corners where there should be one.
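The recursion is easy to write down; this is the standard Douglas-Peucker formulation, using perpendicular distance to the chord between the endpoints:

```python
def simplify(points, eps):
    """Douglas-Peucker: keep the endpoints; recursively keep the point
    farthest from the chord while its distance exceeds eps."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = (dx * dx + dy * dy) ** 0.5
    best_i, best_d = 0, 0.0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        if norm == 0:                      # degenerate chord: fall back to point distance
            d = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
        else:
            d = abs(dy * (x - x0) - dx * (y - y0)) / norm
        if d > best_d:
            best_i, best_d = i, d
    if best_d <= eps:
        return [points[0], points[-1]]
    left = simplify(points[:best_i + 1], eps)
    right = simplify(points[best_i:], eps)
    return left[:-1] + right               # the split point is shared; keep it once
```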

Discussion

The algorithm is well defined and seems to work well for polygons. It is a very basic algorithm, and several improvements could be made to it to produce better results with curves and arcs.

Monday, September 15, 2008

ShortStraw: A Simple and Effective Corner Finder for Polylines


Daniel's blog

Summary

In this paper the author discusses a simple corner finding algorithm, which he compares to the more complex corner finding algorithms of Sezgin and Kim & Kim. The basic theme of this algorithm is that it can be implemented easily and still be very accurate compared to its counterparts. To implement it, a developer needs only very basic mathematics, with no higher mathematical functions from calculus.

The algorithm works in three stages:
  1. Resample the input points of a stroke.
  2. Calculate 'straws', the distances between the endpoints of a window around each point.
  3. Take the points with minimum 'straw' distances as the corners of the stroke.
In resampling, the points of the original stroke are resampled so that the resulting stroke has equidistant points. This is based on Wobbrock's resampling algorithm.

In corner finding the author takes two approaches. The first is a bottom-up approach in which corners are identified by the core algorithm. The second is a top-down approach in which corners missed by the first pass are identified and corners misrecognized earlier are removed.
The bottom-up corner finding works by first calculating a straw value at each point. A straw value is the Euclidean distance between the two points p(i-w) and p(i+w), where w is the window size. The algorithm relies on the fact that these straw values become smaller as points get closer to a corner. Corners are identified by first taking the median of all the straw values, then multiplying the median by a constant to calculate a threshold t. Local minima below the threshold t are taken as corners.
The top-down pass works to find missed corners and remove false positives. This is done by calculating the ratio of the Euclidean distance to the path distance between two consecutive corner points. If the ratio is above a defined threshold value (close to 1), the two points are considered to be connected by a line and there are no corners between them. If the value is below the threshold, there could be more corners between the two points; the threshold is relaxed and the point with the minimum straw value in the middle half of that stretch of the stroke is taken as a corner. A collinearity check is then run on each triplet of consecutive corners, and the middle point is removed if the three points are collinear.
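The resampling and bottom-up pass can be sketched as follows; the window size 3 and median multiplier 0.95 follow the paper's spirit, and the top-down pass is omitted:

```python
import math

def resample(points, n=64):
    """Resample a stroke to n roughly equidistant points (Wobbrock-style)."""
    pts = [tuple(p) for p in points]
    dists = [math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:])]
    step = sum(dists) / (n - 1)
    out, acc, i = [pts[0]], 0.0, 0
    while len(out) < n and i < len(dists):
        if dists[i] > 0 and acc + dists[i] >= step:
            t = (step - acc) / dists[i]
            (x1, y1), (x2, y2) = pts[i], pts[i + 1]
            q = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
            out.append(q)
            pts[i] = q                     # continue from the inserted point
            dists[i] = math.hypot(x2 - q[0], y2 - q[1])
            acc = 0.0
        else:
            acc += dists[i]
            i += 1
    while len(out) < n:                    # pad float shortfall with the endpoint
        out.append(pts[-1])
    return out

def straw_corners(points, w=3, t=0.95):
    """Bottom-up ShortStraw pass: the straw at resampled point i is the chord
    |p[i-w] p[i+w]|; local minima below t * median(straws) become corners."""
    pts = resample(points)
    straws = [math.hypot(pts[i + w][0] - pts[i - w][0],
                         pts[i + w][1] - pts[i - w][1])
              for i in range(w, len(pts) - w)]
    thresh = t * sorted(straws)[len(straws) // 2]
    corners, j = [0], 0
    while j < len(straws):
        if straws[j] < thresh:
            k = j
            while k < len(straws) and straws[k] < thresh:
                k += 1                     # walk across the whole dip
            dip = min(range(j, k), key=straws.__getitem__)
            corners.append(dip + w)        # straw index -> resampled point index
            j = k
        else:
            j += 1
    corners.append(len(pts) - 1)
    return pts, corners
```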

It is interesting to note that ShortStraw, with its simplicity, is able to produce better results than Sezgin and Kim & Kim. It also does not use any temporal information, as Sezgin and Kim & Kim do, so it can also be used for sketches taken from offline sources.


Discussion
I like the idea of ShortStraw, which uses a very intuitive algorithm for corner finding. The good thing about ShortStraw is that it uses no temporal information, which is closer to how humans perceive corners in sketches.

The algorithm uses a lot of threshold values and constants, which suggests that ShortStraw can only be fine-tuned for a particular set of strokes and is not a blanket solution for corner finding. I think this is where contextual information about the stroke could make a difference in the accuracy and usability of this algorithm across a wide range of problems.

Prototype Pruning by Feature Extraction for Handwritten Mathematical Symbol Recognition


Yuxiang's blog

Summary

In this paper the author discusses the problem of recognizing mathematical symbols. The problem is tough because there are around 1000 to 2000 mathematical symbols in use today, and mathematical writing is a blend of drawing and writing. The author defines some features of mathematical symbols, gives algorithms to extract them, and uses these features to recognize the symbols.
In preprocessing the collected data the author describes the techniques he used: chopping the head and tail, re-sampling, smoothing, and size normalization. The author also identifies a number of features, which he groups into these broad categories:
  1. Geometric features
  2. Ink related features
  3. Directional features
  4. Global features
The recognition method used is elastic matching, which determines the minimum distance between the unknown symbol and a set of models.
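Elastic matching is essentially a dynamic-programming alignment; a generic sketch (not the paper's exact distance function) might look like:

```python
def elastic_distance(a, b):
    """DTW-style elastic match: minimum cumulative point-to-point distance
    between two symbol traces, allowing one trace to stretch against the other."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (x1, y1), (x2, y2) = a[i - 1], b[j - 1]
            cost = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def recognize(unknown, models):
    """Nearest model symbol under the elastic distance."""
    return min(models, key=lambda name: elastic_distance(unknown, models[name]))
```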

Discussion

This paper discusses a new set of features which may feel most relevant to mathematical symbols, but which are also good for other kinds of sketches.

Wednesday, September 10, 2008

Graphical Input through Machine Recognition of Sketches


Manoj's blog

Summary


In this paper the author first tries to answer the question: could a machine make useful interpretations of a sketch without employing knowledge of the subject domain? He tries to answer this question by means of a system called HUNCH, a set of FORTRAN programs with several components. One component, called STRAIT, finds corners in a sketch as a function of speed. Curves are treated as a special case of corners: when the curvature of a corner is too gradual, or the curve is drawn too carefully, the output of the straightening program goes through the curve-fitting program, CURVIT, which makes one or more passes over the raw data at places pointed out by STRAIT. When the output of STRAIT and CURVIT was shown to participants, the interpretations made by these programs were not always what the participants expected.

Latching is the idea of joining two or more lines when the sketcher did not quite join them. This also suffered from problems when domain knowledge was not available, as with 3D shapes and pictures with varying scales.

Overtracing is the idea of replacing several closely drawn lines with one line by inferring what the user intended to draw. It suffered from the same problems as latching for 3D pictures.

The question posed earlier by the author, "Is there a syntax of sketching independent of the semantics?", is still unresolved. It is evident from the above scenarios that sketch recognition involves both the drawing of the sketch and the context describing the domain the sketch belongs to.

Towards an interactive system: The author here describes an interactive sketching system in which the user can draw in an unobtrusive manner. The system maintains the user's input and its interpretation in a database, which the HUNCH components can then use. HUNCH has three kinds of components: 1) inference programs, which are improved versions of STRAIT, LATCH, OVERTRACE, and GUESS; 2) display programs, which can display any level of the database; and 3) manipulation programs, which allow the user to modify the database directly. In order for the system to be interactive, STRAIT works in real time and finds lines and curves on the fly.

In conclusion the author says that the sketch recognition problem has come full circle: from an insistence on machine recognition with no demands on the user, through knowledge-based systems, and back to a more modest interactive approach.

Discussion

The author brings up the ideas and complexities involved in sketch recognition and beautification techniques. The author is quite right to say that sketch recognition is not possible without knowledge of the context in which the sketch is drawn.

I think the author's return to the interactive approach at the end basically reflects his disappointment at not being able to achieve the desired results, or any solution other than a knowledge-based system. I think sketch recognition with knowledge-based systems is a perfect model for dealing with this problem.

User Sketches: A Quick, Inexpensive, and Effective way to Elicit More Reflective User Feedback

Comments

Daniel's blog

Summary


In this paper the author presents a new idea for prototype design, in comparison to other more commonly used methods of usability testing. The author focuses on making the right design instead of making the design right. In usability testing (UT), participants usually generate more reactive comments and criticisms than reflective design suggestions or redesign proposals. The author introduces a new technique called user sketching.
The reflective methodology of user sketching has users sketch the design of the system after they have been shown ideas for its possible designs. The author conducts an experiment with four groups of 12 people; each group is shown a different prototype of a house climate control system. There were three types of prototypes: 1) a circular prototype, 2) a tabular prototype, and 3) a linear prototype; the last group was shown all three. When verbally asked for feedback on the design, the participants gave more comments than suggestions for all prototypes. The participants were then asked to draw what in their view would be the ideal interface for the system. A 'quick and dirty' analysis showed that the user designs were stereotyped toward the designs they had been shown earlier, but, importantly, some users came up with new ideas that were not part of those prototypes.

The author classifies his subjects into three categories. 1) The Quiet Sketcher: a participant who rated the prototype highly and, when asked for change suggestions, said 'No' immediately; when asked to sketch, he drew a design that included totally new features. 2) The Passive Sketcher: she also rated the prototype highly, but when asked for changes couldn't figure out what she would change; when asked to draw, she discovered a totally new solution for representing intervals in the system. 3) The Overly Excited Sketcher: she was really excited to contribute to the study but had confused and mixed suggestions for the system; when asked to draw, she drew a totally different interface that even changed the shape of the device.

Here the author illustrated the benefits of engaging users in a sketching activity as an extension of conventional usability testing. The act of sketching proved to facilitate reflection and discovery better than the other methods used.

Discussion

The author's idea is good and does work out well, because the participants are able to convey to the designer the actual interface they are looking for. I think this method can work well for the devices and systems that exist today: the design features the participants proposed in these experiments were not novel; instead they were taken from other devices and artifacts around them.

Wednesday, September 3, 2008

Gestures without Libraries, Toolkits or Training


Manoj's blog

Summary


This paper discusses another gesture recognition algorithm which, according to the author, is simple to implement. The author, who names the algorithm $1, focused on three points in designing it: 1) to present an easy-to-implement algorithm; 2) to compare $1 with more advanced algorithms to show that $1 is as good as them for certain shapes; and 3) to give insight into which user interface gestures are best in terms of human and recognizer performance.

A Simple Four-Step Algorithm:

Step 1: Resample the point path. Here the author explains that two identical sketches drawn at different speeds will have different numbers of input points; a sketch drawn more slowly will have more. The goal of this step is to resample the gestures so that the path defined by the original M points is redefined by N equidistantly spaced points. A value of N = 64 was found to be very reasonable in the implementation.
Step 2: Rotate once based on the indicative angle. The goal of this step is to rotate both sketches so they are best aligned for the later steps. Here the author introduces the concept of the indicative angle, the angle formed between the centroid of the gesture and the gesture's first point. Both gestures are rotated so that their indicative angles are 0.
Step 3: Scale and translate. Here the gesture is first scaled, non-uniformly, to a reference square. The gesture is then translated to a reference point; for simplicity the author uses the origin.
Step 4: Find the optimal angle for the best score. Here the author explains the computation of a score used to recognize the gesture. The candidate gesture is compared to each template gesture to find the average distance between corresponding points; the template with the lowest path distance gives the recognized gesture.
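The four steps can be sketched end to end; this follows the published $1 outline but omits the golden-section search over rotation in step 4, so it is an approximation:

```python
import math

def _resample(pts, n=64):
    """Step 1: n equidistant points along the original path."""
    pts = [tuple(p) for p in pts]
    total = sum(math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:]))
    step = total / (n - 1)
    out, acc, i = [pts[0]], 0.0, 0
    while len(out) < n and i < len(pts) - 1:
        d = math.hypot(pts[i + 1][0] - pts[i][0], pts[i + 1][1] - pts[i][1])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (pts[i][0] + t * (pts[i + 1][0] - pts[i][0]),
                 pts[i][1] + t * (pts[i + 1][1] - pts[i][1]))
            out.append(q)
            pts[i] = q
            acc = 0.0
        else:
            acc += d
            i += 1
    while len(out) < n:
        out.append(pts[-1])
    return out

def _centroid(pts):
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

def _rotate_to_zero(pts):
    """Step 2: rotate about the centroid so the indicative angle becomes 0."""
    cx, cy = _centroid(pts)
    theta = math.atan2(pts[0][1] - cy, pts[0][0] - cx)
    c, s = math.cos(-theta), math.sin(-theta)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

def _scale_translate(pts, size=250.0):
    """Step 3: non-uniform scale to a reference square, centroid to origin."""
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
    pts = [(x * size / w, y * size / h) for x, y in pts]
    cx, cy = _centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def normalize(pts):
    return _scale_translate(_rotate_to_zero(_resample(pts)))

def path_distance(a, b):
    """Step 4 (simplified): average distance between corresponding points."""
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)

def recognize(candidate, templates):
    """templates: {name: raw point list}; returns the closest template's name."""
    c = normalize(candidate)
    return min(templates, key=lambda n: path_distance(c, normalize(templates[n])))
```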

Limitations
  • $1 cannot distinguish between gestures whose identities depend on specific orientations, aspect ratio, or locations.
  • $1 does not use time, so gestures cannot be differentiated on the basis of speed.
Evaluation results
$1 performed very well for user interface gestures, with 99% accuracy overall. With one loaded template it showed 97% accuracy, and with three loaded templates 99.5%. Rubine, on the other hand, performed at 95% accuracy using 9 training examples for 16 gestures.

Discussion

I liked the way the authors branded their algorithm as '$1', which immediately conveys that it is a fast, easy-to-implement solution for gesture recognition. The algorithm is very interesting, and the author quite rightly states its limitations.

I don't see any real-world application of this algorithm yet, but it's a good read for people like me who are looking to get started in this field and implement something.

MARQS


Yuxiang's blog

Summary


This paper discusses an algorithm that can identify multi-stroke sketches using a set of global features that are both domain- and style-independent. As a real-world example, the authors created an application called MARQS, in which the user can store photo and music albums that can later be retrieved by matching them against multi-stroke sketches the user designated during album creation.
The recognition algorithm uses two different classifiers, depending upon the number of training examples available. Initially the user is asked to give only one example sketch, and whenever the user performs a search, the query sketch is also added to the examples. The algorithm uses global features, since it puts no constraint on how users draw their sketches. Currently it uses four global features to describe a sketch: 1) bounding box aspect ratio, the total width of the sketch divided by its total height; 2) pixel density, the ratio of filled (black) pixels to total pixels within the sketch's bounding box; 3) average curvature, the sum of the curvature values at all points in all strokes divided by the total sketch length (the sum of the lengths of all strokes in the sketch); and 4) the number of perceived corners across all strokes of the sketch.
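These four features are easy to approximate; in this sketch, pixel density is replaced by an ink-length-to-area proxy and corners are counted with a crude turn-angle threshold, both assumptions on my part rather than the paper's definitions:

```python
import math

def global_features(strokes):
    """Four MARQS-style global features for a multi-stroke sketch.
    `strokes` is a list of point lists."""
    pts = [p for s in strokes for p in s]
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    aspect = w / h if h else 0.0                       # 1) bounding-box aspect ratio
    length = sum(math.hypot(b[0] - a[0], b[1] - a[1])
                 for s in strokes for a, b in zip(s, s[1:]))
    area = w * h
    density = length / area if area else 0.0           # 2) proxy for pixel density
    turn, ncorners = 0.0, 0
    for s in strokes:
        for a, b, c in zip(s, s[1:], s[2:]):
            a1 = math.atan2(b[1] - a[1], b[0] - a[0])
            a2 = math.atan2(c[1] - b[1], c[0] - b[0])
            ang = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
            turn += ang
            if ang > math.pi / 4:                      # assumed corner threshold
                ncorners += 1                          # 4) perceived corners
    curvature = turn / length if length else 0.0       # 3) average curvature
    return aspect, density, curvature, ncorners
```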
MARQS is a real-world application that utilizes the recognition algorithm described above. It is a media storage and retrieval sketch-query system: it allows users to create, edit, open, add to, and delete albums and pictures, and lets the user search for an album with a sketch, returning the top 4 matching sketches.
To gather preliminary data, 1350 sketch queries were performed (15 sketches, 9 queries each, 10 tests). The system used the single-example classifier 27% of the time and the linear classifier 73% of the time. 70% of the time the system produced the correct result at the top, and 98% of the time it was in the top 4; 2% of the time the correct result was not in the top 4.

Discussion
Here I liked the idea of the system becoming more accurate with every search performed; adding the query sketch to the example space is a good idea. But I am not sure it is a good enough algorithm for other real-world applications like drawing a circuit diagram.
In MARQS the application shows the top 4 results and the person chooses one, which tells the system to associate that query sketch with a particular example class. By using MARQS, the person is training the system without actually knowing that he is training it.

I also think 70% accuracy for the top result will not be very effective in real-world applications. Nonetheless, this system opens a new dimension of multi-stroke recognition.

Monday, September 1, 2008

Visual Similarity of Pen Gestures


Daniel's blog

Summary


In this paper the author discusses the issue of designing good gestures, so that they are easy for the users of a system to remember. The author is trying to develop a tool that will enable UI designers to improve their gesture sets so that they are easy to remember and use.

The author investigates gesture similarity and develops a computable, quantitative model of it, which will help in creating a gesture-designer tool. To do so, the author conducts two experiments with human participants.

Perceptual similarity is the concept of how human beings perceive two shapes to be similar to each other. Psychologists have investigated shapes that are simpler than gestures; Attneave found that the similarity of parallelograms correlated with the log of their area and their tilt.

Multi-dimensional scaling (MDS) is a technique for reducing the number of dimensions of a data set so that patterns can be more easily seen by viewing a plot of the data in two or three dimensions. The author uses the MDS variant called INDSCAL, which takes as input a proximity matrix for each participant and takes individual differences into account.
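For intuition, classical (Torgerson) MDS can be written in a few lines; note that INDSCAL additionally fits per-participant dimension weights, which this sketch does not do:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical MDS: embed items in k dimensions so that Euclidean distances
    approximate the dissimilarity matrix D."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]             # top-k eigenpairs
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale
```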

In experiment one the author makes a set of 14 gestures that are widely dissimilar from each other. Each of the twenty participants is shown sets of 3 gestures on the screen and asked to select the gesture that is most dissimilar to the other two. All possible combinations are shown to the participants, for a total of 364 screens. After the experiment the author had two important points to analyze from the collected data: 1) determining which geometric properties of the gestures influenced their perceived similarity, and 2) producing a model of gesture similarity, so that given two gestures the system could predict the similarity that humans would perceive. The first point was addressed through MDS plotting, where the Euclidean inter-gesture distances corresponded to inter-gesture dissimilarities. The second was addressed by running a regression analysis to determine which geometric features correlated with the reported similarity; some features were taken from Rubine's algorithm and some were inspired by the MDS analysis. The author was able to derive a model that correlated 0.74 with the reported similarities.

In experiment 2 the author wanted to explore how systematically varying different features would affect perceived similarity. For this the author made 3 gesture sets of 9 gestures each. The first set was to explore total absolute angle and aspect; the second, length and area; the third, rotation-related features. The author then took 2 gestures from each set and made a fourth set. Again twenty people were shown sets of 3 gestures, 538 gesture sets in total, and the trials were analyzed using the same techniques as in experiment one. The author determined that length and area are not very significant contributors to similarity judgments. Another finding was that the perceived similarity among gestures is not proportional to the angle of rotation of the gesture; instead, gestures with horizontal and vertical lines are perceived as more similar to each other than gestures whose components are diagonal.

The author concludes that human perception of similarity is very complicated, with several cues involved in determining the similarity or dissimilarity of gestures. However, the author's model correlates 0.74 with the perceived similarity in experiment one, which makes it a fairly good model.

Discussion

The author has conducted an extensive investigation to determine a model of the perceived similarity of gestures. But even if we can determine that two gestures are similar to each other, we still cannot make a gesture set that is easy to remember. Remembering a gesture depends not only on its being dissimilar to other gestures, so that the user does not confuse gestures in memory, but also on the shapes and actions mapped to the gesture and on the complexity of the gesture itself; using similar gestures for similar meanings will also contribute to how well gestures are remembered.