Thermopile Infrared Temperature Sensor

An analysis of the basic principles of infrared temperature sensors, including an introduction to thermopile infrared sensors.
680 Luis Ignacio Lopera Gonzalez et al. / Procedia Computer Science 19 (2013) 678-685

3.1. Thermopile sensor

Thermopile sensors are capable of measuring the thermal radiation absorbed on their active area. They belong to the category of thermal detectors, which generate a small thermoelectric voltage proportional to the detected radiation. Their operation principle is based on the Seebeck effect. The Seebeck effect describes the electric current in a closed circuit composed of two dissimilar materials when their junctions are maintained at different temperatures [9, 10]. When several thermopile sensors are arranged in a matrix, a scene image can be constructed from the heat radiation. The temperature differences between the sensing elements (pixels) can be interpreted as objects by measuring how the pixel values differ from the ambient temperature. In this work, we considered the Panasonic GridEye sensor [8], which is an array of 64 thermopile sensors arranged in an 8x8 matrix. Example images and processing steps performed based on the thermal images obtained from the sensor are detailed in the following section.

3.2. Processing architecture

The proposed architecture for detecting fine-grained interactions between people and objects of interest in a scene consists of three main modules: sensor layer, object detection layer, and classification layer. See Figure 1 for a diagram of the architecture.

Fig. 1: Detailed diagram of the processing architecture to process thermopile sensor images in this work. Please refer to the main text for more details regarding the functionalities inside each block.

Sensor layer. The sensor layer uses raw data from the thermopile sensor matrix and applies a Brown's lens correction (see Eq. 1) to fix the barrel distortion due to the sensor's construction.
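As a minimal sketch of this pixel-versus-ambient interpretation (not from the paper; the ambient estimate, deviation threshold, and frame values are hypothetical), each pixel of a thermopile frame can be labelled by how it deviates from the ambient temperature:

```python
# Sketch: interpret a thermopile frame by comparing each pixel with the
# ambient temperature. AMBIENT and THRESHOLD are hypothetical values.

AMBIENT = 22.0      # ambient temperature estimate in deg C (assumed)
THRESHOLD = 1.5     # minimum deviation to consider a pixel occupied (assumed)

def occupancy(frame, ambient=AMBIENT, threshold=THRESHOLD):
    """Label each pixel as 'hot', 'cold', or 'empty' w.r.t. ambient."""
    labels = []
    for row in frame:
        labels.append([
            "hot" if t - ambient > threshold
            else "cold" if ambient - t > threshold
            else "empty"
            for t in row
        ])
    return labels

# A 2x2 toy frame stands in for a full 8x8 GridEye frame:
frame = [[22.1, 25.0],
         [19.0, 22.4]]
print(occupancy(frame))  # -> [['empty', 'hot'], ['cold', 'empty']]
```

A real deployment would estimate the ambient temperature from the scene itself rather than hard-coding it.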
Here, rc and ru are the corrected and uncorrected distances of pixels with respect to the optical axis. Kn are radial distortion coefficients; here, K1 = 74×10⁻³ and K2 = 0.17×10⁻³ are used.

rc = ru (1 + K1 ru² + K2 ru⁴)    (1)

To compensate for the sensor mounting angle, an area correction was applied as described by Eq. 2:

Tc = C Au (Tu − Tamb) + Tamb,  with C = 1 / (4 d² tan θ)    (2)

Here, Tc and Tu are the corrected and uncorrected temperature of a pixel respectively, Tamb is the ambient temperature, Au is the uncorrected projected area of a pixel, and C is the area normalization constant. These corrections provide a grid where feature appearances are independent of their location in the matrix. The parameters Kn and Au were fitted according to [8].

Object detection layer. By taking the corrected matrix from the sensor layer, an occupancy map was derived by assigning the most probable state given the pixel's temperature difference with the ambient temperature. The states can be one of empty, cold, or hot occupant. The resulting occupied pixels are then grouped by searching the surrounding pixels for the ones in the same state. This approach does not allow for an object to be partially hot and partially cold with respect to ambient temperature. If adjacent pixels present hot-cold behaviour, separate objects will be detected.

Subsequently, information about the scene context was added. We used here prior knowledge on the stationary objects located in the sensor's field-of-view. Such objects might not be visible by the sensor, e.g. objects that are overshadowed or at room temperature. In Figure 2(c) it can be seen how the inclusion of the prior object location knowledge allows to split object blobs into two separate objects.
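The two sensor-layer corrections can be sketched as follows (the function names are ours; the distortion coefficients are those stated in the text, and the constant C is passed in rather than derived):

```python
# Sketch of the sensor-layer corrections (Eqs. 1 and 2).

K1 = 74e-3    # first radial distortion coefficient (from the text)
K2 = 0.17e-3  # second radial distortion coefficient (from the text)

def radial_correction(r_u, k1=K1, k2=K2):
    """Brown's lens correction: corrected radius from uncorrected radius."""
    return r_u * (1.0 + k1 * r_u**2 + k2 * r_u**4)

def temperature_correction(t_u, t_amb, area_u, c):
    """Area correction compensating the sensor mounting angle (Eq. 2)."""
    return c * area_u * (t_u - t_amb) + t_amb

# On the optical axis (r_u = 0) the radius is unchanged, and a pixel at
# ambient temperature stays at ambient temperature after correction:
print(radial_correction(0.0))                        # -> 0.0
print(temperature_correction(22.0, 22.0, 1.3, 0.8))  # -> 22.0
```

Note that both corrections leave "neutral" inputs untouched, which is a quick sanity check for any fitted parameter set.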
Also, this process allowed us to make a first classification: objects created from intersecting blobs and considering prior object location knowledge are considered static objects, while the remaining ones are considered dynamic objects.

The final step in the object detection layer was to keep track of each object across frames. This is needed to keep a consistent history of each object as required by the following steps.

Fig. 2: Output of the object detection layer for a scene with one dynamic and four static objects; (a) lens-corrected sensor output, (b) resulting occupancy map with occupied pixels outlined in green, (c) detected objects (green) and fixed objects (blue), (d) resulting labelled objects.

Classification layer. We arranged all objects identified in an image scene into possible pairs as follows: (1) static objects paired with dynamic objects, and (2) pairs of two dynamic objects. For each pair, a reference object was selected. For static-dynamic pairs, the static object was always used as reference. Object processing queues were then created per object pair and results grouped according to the reference object. As a result, the classification layer will provide the current activity for each reference object. An activity is defined as the state of an object or the interaction of this object with another one in the scene.

We classified the reference object state and its interaction with the other object in the couple. State and interaction results were then fed into a state filter to remove unlikely or impossible states given the sequence in which activities have occurred.

At this point an activity interest list was employed. This list contained all possible interactions with the reference object, ordered by their relevance.
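The grouping of occupied pixels described above can be sketched as a flood fill that only joins neighbouring pixels sharing the same state, so adjacent hot and cold pixels end up in separate objects (a sketch under our own naming; the paper does not specify its grouping algorithm beyond this behaviour):

```python
# Sketch: group occupied pixels into objects by same-state 4-connectivity.

def group_objects(labels):
    """Return a list of objects, each a list of (row, col) pixel positions."""
    rows, cols = len(labels), len(labels[0])
    seen = set()
    objects = []
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] == "empty" or (r, c) in seen:
                continue
            state, stack, blob = labels[r][c], [(r, c)], []
            seen.add((r, c))
            while stack:
                cr, cc = stack.pop()
                blob.append((cr, cc))
                for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in seen
                            and labels[nr][nc] == state):
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            objects.append(blob)
    return objects

# A hot pixel next to a cold pixel yields two separate objects:
labels = [["hot", "cold"],
          ["empty", "empty"]]
print(len(group_objects(labels)))  # -> 2
```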
The current activity for a reference object was determined by selecting from all the detected interactions the one ranking highest in the list.

As a final step, a temporal filtering was applied, equivalent to a low-pass filter, to remove transient states that last very short time periods only.

682 Luis Ignacio Lopera Gonzalez et al. / Procedia Computer Science 19 (2013) 678-685

Table 1: Overview on objects, interactions, and occurrence instances in scripted and real-life datasets.

4. Study Methodology and Implementation Details

This section describes the study methodology and implementation details of the recordings with the thermopile sensor.

Test installation and data recording. The sensor was placed in an office building pantry area, overseeing several static objects and dynamic objects. In particular, the pantry area contained a faucet with hot and cold water, a coffee pot, and a microwave. In this space, several activity scenarios are regularly performed where static objects interact with dynamic objects. As an example, a person (dynamic object) uses the faucet (static object), or two persons are talking (dynamic objects). Table 1 shows the states and interactions considered for each static object, and the interaction considered between dynamic objects. It also lists the activity occurrences in both scripted and real-life datasets. After classifier training using the scripted dataset, a validation was performed using recordings of 4.9 hours on a regular working day. All recordings were performed in the pantry area. Ground truth for the activity dataset was obtained using manual annotations from a video recorded at 1 fps.
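The temporal filtering step can be approximated as a minimum-duration filter: runs of states shorter than a given number of frames are replaced by the preceding stable state (our simplification of the low-pass idea; the frame threshold is hypothetical, the paper does not state one):

```python
# Sketch: suppress transient states that last fewer than `min_len` frames.

def suppress_transients(states, min_len=3):
    """Replace runs shorter than min_len with the preceding stable state."""
    if not states:
        return []
    # Split the sequence into runs of identical consecutive states.
    runs = []
    for s in states:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    out, stable = [], runs[0][0]
    for state, length in runs:
        if length >= min_len:
            stable = state
        out.extend([stable] * length)
    return out

seq = ["Away"] * 4 + ["Serving"] * 1 + ["Away"] * 4
print(suppress_transients(seq))  # the 1-frame "Serving" blip is removed
```

A true low-pass filter would operate on the underlying classifier scores; filtering the discrete state sequence is the simpler variant sketched here.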
The annotations were up-sampled after the recordings to match the sensor data rate.

Fig. 3: Illustrations of the thermopile sensor and placement in the studies: (a) thermopile sensor used: Panasonic GridEye, (b) office pantry area used for evaluation. The sensor was placed at the ceiling, capturing the table corners, microwave, and counter with the faucet, refrigerator, and the coffee pot.

Object and interaction classifiers. Classifiers were used to determine state and interaction classes. Table 3 presents the features selected and used for classifications per pair of objects. For state classifiers, only features for the reference object were considered. For interaction classifiers, the complete set was used. Since multiple dynamic objects could exist in a scene at any given time, the interaction classifiers ran for all object pairs containing the same static object as reference. In the pantry area and corresponding to the number of objects, a total of eight classifiers were used. We used support vector machines (SVM), where parameters had been tuned with a grid search method as suggested by [11] on training data.

Table 3: Feature set considered for the object and interaction classification. The list is structured into features for state classifiers (reference objects) and for interaction classifiers (reference and paired objects).

Reference objects               Paired objects
1. Area temperature product     2. Area temperature product
3. Temperature variance         4. Temperature variance
5. Area                         6. Area
                                7. Distance to object
8. Gradient X direction         12. Gradient X direction
9. Gradient Y direction         13. Gradient Y direction
10. Gradient magnitude          14. Gradient magnitude
11. Gradient phase              15. Gradient phase
16. Temperature                 17. Temperature
                                18. Position variance

The state filtering was implemented using HMMs and fitted on the training dataset using hmmestimate from Matlab. The function calculates the maximum likelihood estimate for transition and emission probabilities given the sequence and known states extracted from the training set. Subsequently, the activity interest list refinement was applied. Table 2 shows the activity interest lists defined per reference object. For example, if there are four dynamic objects, and the Coffee Pot as static object, then the classified interactions are [Away, Away, Serving, Present]. After applying the activity interest list, the recognised result is Serving.

Table 2: Activity interest list per reference object as used in our evaluation. The lower the index, the higher the interest (relevance) for the recognition. The interest can be adjusted according to application needs.

Evaluation procedure. To evaluate our approach, we initially determined the relevance of features presented in Tab. 3 for both classifiers. We used a variation of the approach presented in [12] to determine relevance. Instead of using the complete feature set jointly, each feature was evaluated individually, since a high degree of correlation could be expected. As a result, this yielded a much smaller feature vector size. Subsequently, accuracy performance measurements were obtained using the real-life dataset.

5. Results

The feature relevance analysis results are shown in Figure 4 for state and interaction classifiers. As the diagrams indicate, the six best features were sufficient to achieve high accuracy for state classifiers. For interaction classifiers, eight features were needed to obtain high accuracy. Choosing additional features did not improve performance for either of the two classifier groups.

Figure 5 shows an example of the grid search results obtained for SVM parameters C and σ.
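The interest list refinement can be sketched as a minimum over the per-pair results, ordered by relevance. The ordering below is illustrative only (we order it so that the worked Coffee Pot example resolves to Serving); the exact indices per object are those of Table 2:

```python
# Sketch: among the interactions classified for all pairs sharing a
# reference object, the activity with the highest interest wins.
# The ordering of this list is an assumption, not taken from Table 2.

COFFEE_POT_INTEREST = ["Serving", "Present", "Away"]  # most to least relevant

def refine(classified, interest):
    """Pick the highest-interest activity among the per-pair results."""
    return min(classified, key=interest.index)

print(refine(["Away", "Away", "Serving", "Present"], COFFEE_POT_INTEREST))
# -> Serving
```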
Here, a smooth surface near the RBF kernel center with small σ could be observed. This indicates that the SVM should perform well with the test data, which is confirmed by the results shown in Table 4. Similar plots were obtained for the other seven classifiers.

Table 4 summarizes classifier modelling performances on the training data. As the result shows, the classifiers can sufficiently model most cases, except for the microwave state classifier. We observed that the microwave states did not result in sufficient temperature difference between On and Off. Rather, a sequence of interaction events involving the microwave could provide sufficient discriminatory power. This issue becomes more pronounced for the real-life dataset, shown in Table 5. As people will simply stand in front of the microwave, e.g. to read the billboard, the object state cannot be reliably determined.

684 Luis Ignacio Lopera Gonzalez et al. / Procedia Computer Science 19 (2013) 678-685

Fig. 4: Accuracy vs. feature vector size for (a) state classifiers, (b) interaction classifiers. The analysis was performed in a 5-fold cross validation using the training dataset.

Fig. 5: SVM parameter grid search for Coffee Pot classifiers: (a) state classifier, (b) interaction classifier.

6. Discussion and conclusion

The 2D-matrix thermopile sensor provides information for detecting complex activities, like serving coffee in a pantry area. Our test scenario showed that the matrix configuration simplified monitoring relatively complex areas. While our approach required obtaining a map of static objects, there was no need to carefully measure the overlap of each thermal device with the sensor's field of view.
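The parameter grid search itself is conceptually simple: every parameter combination is scored on training data and the best one kept. The sketch below substitutes a toy threshold classifier for the SVM so it stays dependency-free; the data, parameters, and scoring are all hypothetical:

```python
# Sketch: exhaustive grid search over two parameters of a toy classifier.
# The paper tunes SVM parameters C and sigma [11]; a threshold classifier
# stands in here so the example needs no SVM library.
import itertools

def accuracy(params, data):
    """Fraction of samples the toy classifier labels correctly."""
    threshold, margin = params
    correct = 0
    for x, label in data:
        predicted = "hot" if x > threshold + margin else "cold"
        correct += predicted == label
    return correct / len(data)

def grid_search(data, thresholds, margins):
    """Evaluate every parameter pair and return the best-scoring one."""
    grid = itertools.product(thresholds, margins)
    return max(grid, key=lambda p: accuracy(p, data))

data = [(20.0, "cold"), (21.0, "cold"), (26.0, "hot"), (27.0, "hot")]
best = grid_search(data, thresholds=[20, 23, 26], margins=[0.0, 0.5])
print(accuracy(best, data))  # -> 1.0
```

With an actual SVM, the same loop would run over a logarithmic grid of (C, σ) values and score each pair by cross-validation rather than training accuracy.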
This issue was mentioned by Wren and Tapia [3] as a limiting factor for classifying activities using ambient sensors. Although not all interactions described in [3] were tested in this work, e.g. for meetings (corresponding to split and join activities), similar performances were achieved in our approach. Nevertheless, our method required a simpler sensor installation.

The height at which a sensor is placed determines the tradeoff between coverage and resolution of the scene image. The tradeoff can be observed in Tabs. 4 and 5, where some good performance ratings obtained during training did not hold for validation. We consider that the performance reduction was due to proximity and size of the areas of interest. For example, the normal use of the faucet makes people invade the refrigerator area, resulting in erroneous class responses.

Table 4: Normalized average training set accuracies per classifier. The result shows sufficient modelling capability of the approach for most state and interaction classifiers.

Classifier    Coffee pot  Faucet  Microwave  Refrigerator  Meeting
State         1.00        0.70    0.56       1.00          -
Interaction   0.82        0.97    0.         0.73          0.85

Luis Ignacio Lopera Gonzalez et al. / Procedia Computer Science 19 (2013) 678-685 685

Table 5: Overview on recognition performance using the real-life dataset for testing classifiers. In the real-life dataset, not all activities were performed, shown here as combined states for some classes.

Object        Activity                   Accuracy
Coffee pot    State: Off                 NaN
              State: On                  100.00%
              Interaction: Present       -
              Interaction: Serving       96.43%
Faucet        State: Off                 80.00%
              State: Cold/Hot            45.45%
Microwave     State: Off                 34.48%
Refrigerator  State: Closed              66.67%
              State: Open                0.00%
              Interaction: Away/Present  24.44%
              Interaction: Interacting   11.36%
People        Interaction: Single        65.52%
              Interaction: Meeting       60.71%

Thermopile sensors allow us to recognize multiple activities with one sensing device in places where multiple sensor modalities would have been required.
It can be expected that every recognized activity can be mapped to an energy cost, which in turn can be fed back to the user to guide energy consumption awareness. This energy consumption feedback can be given instantaneously while the activity is being performed. Although our test showed that the sensor can be used to identify complex activities in a scene, the processing was done offline. In further work, the processing architecture could be implemented in the thermopile sensor device.

7. Acknowledgments

This work was kindly supported by the EU FP7 project GreenerBuildings, contract no. 258888, and the Netherlands Organisation for Scientific Research (NWO) project EnSO, contract no. 647.000.004.

8. References

[1] N. Oliver, A. Garg, E. Horvitz, Layered representations for learning and inferring office activity from multiple sensory channels, Comput. Vis. Image Und. 96(2) (2004) 163-180, special issue on event detection in video. doi:10.1016/j.cviu.2004.02.004.
[2] D. Kawanaka, T. Okatani, K. Deguchi, HHMM based recognition of human activity, IEICE Transactions on Information and Systems E89-D(7) (2006) 2180-2185. doi:10.1093/ietisy/e89-d.7.2180.
[3] C. R. Wren, E. M. Tapia, Toward scalable activity recognition for sensor networks, in: LoCA 2006: Proceedings of the 4th International Symposium on Location, Vol. 3987 of Lecture Notes in Computer Science, Springer, 2006, pp. 168-185. doi:10.1.1.65.6272.
[4] F. Wahl, M. Milenkovic, O. Amft, A distributed PIR-based approach for estimating people count in office environments, in: EUC 2012: Proceedings of the 10th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing, 2012.
[5] K. Murao, T. Terada, A. Yano, R. Matsukura, Detecting room-to-room movement by passive infrared sensors in home environments, in: AwareCast 2012: Workshop on Recent Advances in Behavior Prediction and Pro-active Pervasive Computing, 2012.
[6] E. M. Tapia, S. S. Intille, K. Larson, Activity recognition in the home using simple and ubiquitous sensors, in: Pervasive, 2004, pp. 158-175.
[7] D. Ramanan, D. A. Forsyth, A. Zisserman, Strike a pose: Tracking people by finding stylized poses, in: CVPR, 2005, pp. 271-278.
[8] Panasonic Electric Works Corporation, Infrared array sensor: Grid-EYE (June 2012). URL http://pewa.panasonic.com/assets/pcsd/catalog/grid-eye-catalog.pdf
[9] C. Kittel, Thermal physics, W. H. Freeman, San Francisco, 1980.
[10] J. Fraden, Handbook of modern sensors: physics, designs, and applications, AIP Press/Springer, New York, 2004.
[11] C.-W. Hsu, C.-C. Chang, C.-J. Lin, A practical guide to support vector classification, Bioinformatics 1(1) (2010) 1-16.
[12] L. Hermes, J. Buhmann, Feature selection for support vector machines, in: Pattern Recognition, 2000. Proceedings. 15th International Conference on, Vol. 2, 2000, pp. 712-715. doi:10.1109/ICPR.2000.906174.
