Cognitive measurements: the future of intelligent systems
The article was published in issue no. 4, 2013 [pp. 74–82].
Abstract: This paper considers next-generation intelligent systems based on measurements and knowledge discovery from sensor data. A comparison between measurement and expert estimation is made. Some sources and types of measurement uncertainty are elicited. Non-classical measurement concepts such as distributed measurements, intelligent measurements and soft measurements are discussed. Some fundamentals of granular measurement theory are formulated. A new concept of cognitive measurements as a hierarchical information granulation process based on the principle of «Measurement–Pragmatic Estimation» unity is proposed. A two-level architecture of cognitive measurements is considered. A logical-algebraic approach to data interpretation and analysis on the top level of cognitive measurements is developed. To implement cognitive measurements, the notion of cognitive sensors equipped with granular pragmatics is introduced by taking multivalued and fuzzy logics together with Peirce's pragmatic maxim. Granular interpretations of multi-sensor data fusion are proposed. Both crisp and fuzzy logical pragmatics for individual sensors and their dyads are constructed using product lattices and bilattices.
Authors: (maria.svyatkina@gmail.com), Russia; Ph.D. (tarasov@rk9.bmstu.ru), Russia
Keywords: computing with words, cognition, measurement, measurement uncertainty, measurand, cognitive measurement, granule, information granulation, granular measurement, pragmatics, multivalued logics, fuzzy set, bilattice, intelligent system 

Nowadays the extension of existing architectures for intelligent systems and the development of hybrid intelligent technologies by implementing cognitive processes and mechatronic (or robotic) modules has become a key feature of modern AI applications. Good examples are intelligent robots [1–3] and ambient intelligence (AmI) systems [4–6]. In [2] V. Finn considers an intelligent robot as a cognitive system able to make decisions and perform actions. So a general structure of a mobile intelligent robot may be given as follows: Intelligent Robot = Cognitive Subsystem + Intelligent Subsystem + Action Subsystem + Mobility Subsystem. Similarly, an ambient intelligence system can be represented as a cognitive metaagent [6].

Basic cognitive processes are focused on knowing the system's environment: these are its sensing, perception, representation, thinking, learning and understanding. The operation of a cognitive system is based on the transitions «Data–Information–Knowledge–Wisdom» [7] (or, in terms of M. Zeleny, «Nothing-Knowledge, What-Knowledge, How-Knowledge and Why-Knowledge» [8]); it provides decision-making processes with an adequate information environment. To realize these transitions in artificial cognitive systems we have to synthesize multi-level understanding mechanisms: a) understanding links between objects or events; b) understanding behavioral norms and standards; c) understanding the situation as a whole (see Figure 1). In [9] D.A. Pospelov introduced seven understanding levels for intelligent interfaces.

In developing new-generation intelligent systems we ought to take into consideration the following special features of cognitive processes: 1) cognition is an open system that is based both on available knowledge and on measured current data; 2) an intrinsic property of cognition is information granulation, a purposeful formation of granules of various sizes. Measurement and granulation play a leading part in cognition. In this context we recall a famous saying by D.
Mendeleev: «Science begins where measurements are started». First of all, we are interested in transitions from measurement and estimation to knowledge. Here we are guided by Zadeh's thesis that a good deal of information in intelligent systems may be divided into two groups: a) factual information, which is numerical and measurement-based; b) perception-based information, which is expressed in a linguistic form [10]. In this paper we consider measurement as both a knowledge-forming and a knowledge-based process. So the interactions between intelligent systems and measurements are bilateral. On the one hand, sensor-based information serves as a natural source for constructing an internal model of the external world. Here a special data mining and knowledge discovery processor is of primary concern (Figure 2). On the other hand, measurements themselves become intelligent if metrological knowledge is used and measurement expert systems or special measurement agents are constructed. A new concept of cognitive measurements based on granules and granular structures is suggested in this paper. Some granular pragmatics for individual sensors and their dyads are introduced.

On the Way to Cognitive Measurements

Measurement vs. Expert Estimation. Some foundations of the classical theory of measurement were suggested by H. Helmholtz in the late 19th century. The term «Measurement Theory» itself was used by J. Pfanzagl in 1968 [11], although the first formal variant of such a theory was proposed in the 1950s by D. Krantz, P. Suppes et al. under the name of the representational theory of measurement (see [12]). This theory has provided an algebraic approach to measurement using such concepts as sets, relations, mappings, empirical and numerical relation systems, scales, morphisms, etc. The objective of a measurement is to determine the value of the measurand, that is, the value of the particular quantity to be measured [13].
A measurement begins with an appropriate specification of the measurand, the method of measurement and the measurement procedure. In practice, the required specification of the measurand is determined by the needed accuracy of measurement. By classical measurement we mean an experimental specification of a physical value by means of technical devices and instruments. This definition contains the following main attributes of classical measurements: 1) only physical entities (properties of real objects or processes) may be measured; 2) measurement means experimentation under partial certainty; 3) measurement is made by special measurement instruments and tools (measurement information systems), where a leading part is played by standards enabling the preservation and reproduction of measurement units.

In many cases measurements as information processes underlie knowledge acquisition [14]; besides, they are usually opposed to expert estimation based on human judgments. Let us point out some conceptual and formal differences between measurement and expert estimation. First of all, physical measurements give mainly structural, syntactic information, whereas expert estimates focus on semantic and/or pragmatic information aspects. Furthermore, the classical concept of a scale is defined as a homomorphism from an empirical system to a numerical system [12]. Therefore, each relation in the primary, ill-defined system has an analogous relation in the secondary, well-structured system. Hence, a basic problem of classical measurement theory consists in specifying formal properties of empirical relations and operations and proving their isomorphism to corresponding relations and operations over numbers. Another basic problem is related to determining the type of measurement scale. The theory of measurement errors proposed by C.F. Gauss was developed within the framework of probability theory and mathematical statistics.
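The representational view of a scale mentioned above, a homomorphism from an empirical relational system to a numerical one, can be illustrated with a minimal sketch using the classical Mohs hardness scale (the dictionary and function names are illustrative, not from any measurement library):

```python
# Ordinal scale as a homomorphism: empirical objects (minerals) are mapped
# to numbers so that the empirical "scratches" relation is mirrored by ">".
hardness_rank = {"talc": 1, "gypsum": 2, "calcite": 3, "quartz": 7}

def scratches(a: str, b: str) -> bool:
    """Empirical relation: mineral a scratches mineral b."""
    return hardness_rank[a] > hardness_rank[b]

# Homomorphism check: the numeric order represents the empirical relation.
assert scratches("quartz", "talc")
assert not scratches("gypsum", "calcite")
```

Only the order is meaningful here; arithmetic on the ranks (e.g. quartz being "seven times harder" than talc) is not licensed by an ordinal scale, which is exactly the scale-type problem mentioned in the text.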
In this sense, classical measurement supposes the specification of a mathematical space with a probabilistic (additive) measure. Unfortunately, in many real-world applications the additivity axiom does not hold. Here some extensions of the existing approach are needed, for example, based on Dempster–Shafer theory. In turn, modern estimation theory deals with polymorphisms (i.e. one-to-many or many-to-many relations). A typical example of such non-classical scales based on polymorphic relations is a fuzzy correspondence between linguistic values and their numerical representations in the case of Zadeh's linguistic variable [15] and its extensions [16]. An important specific feature of estimation theory is related to constructing generalized scales (this term was coined by D.A. Pospelov [17]). Unlike classical scales, generalized scales allow many objects to be put in correspondence with every point, with various degrees.

Measurement Uncertainty: Reasons and Types. Measurement is never exact; all measurement results have some error. In a number of international and national standards (see, for example, [18, 19]) the term «measurement error» has been replaced by the more general concept of «measurement uncertainty». The concept of uncertainty as a quantifiable attribute of measurement is relatively new and has various interpretations. On the one hand, the word «uncertainty» means doubt, and thus in its broadest sense «uncertainty of measurement» means doubt about the validity of the result of a measurement [18]. On the other hand, the uncertainty of the result of a measurement reflects the lack of exact knowledge of the value of the measurand. In any case, the result of every measurement has two components: the measurand value and its estimated uncertainty. Formally, uncertainty is a non-negative parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand.
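The two-component structure of a measurement result (measurand value plus a non-negative uncertainty parameter) can be sketched as a small data structure; the class and field names are illustrative, and the coverage-interval convention follows common GUM practice:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Measurement:
    """A measurement result: measurand value plus its estimated uncertainty."""
    value: float        # best estimate of the measurand
    uncertainty: float  # non-negative dispersion parameter (standard uncertainty)

    def __post_init__(self):
        if self.uncertainty < 0:
            raise ValueError("uncertainty must be non-negative")

    def interval(self, k: float = 2.0):
        """Expanded-uncertainty interval with coverage factor k."""
        return (self.value - k * self.uncertainty,
                self.value + k * self.uncertainty)

m = Measurement(value=9.81, uncertainty=0.02)
lo, hi = m.interval(k=2.0)
```

The point of the sketch is that the result is a granule (an interval) rather than a single number, which anticipates the granular view developed below.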
In practice, there are many possible sources of uncertainty in a measurement [18, 19], including: a) incomplete or inaccurate definition of the measurand; b) instrument imperfection due to its finite resolution or discrimination threshold; c) inadequate knowledge of the effects of environmental conditions on the measurement results, or imperfect measurement of environmental conditions; d) non-representative sampling: the sample measured may not represent the defined measurand. Thus, uncertainty is inherent in all components of measurement information systems and processes. The experimental information concerning the measurement object and measurement conditions is inaccurate and incomplete. The measurement procedures are often based on rather restrictive and approximate suppositions, and the measurement instruments have limited resolution, in particular for complex and dynamic object structures. To deal with these uncertainty factors, probabilistic and statistical methodologies are usually applied; they consider such quantities as the confidence level, the error probability and the significance level. The criticism of the probabilistic approach mainly reduces to the following remarks: 1) a unified representation of measurement uncertainty by random errors is not well justified; 2) in some circumstances the postulated normal distribution law for errors is not realistic. So the idea of finding a more general framework for measurement theory, for instance interval analysis or fuzzy set theory, seems quite reasonable.

Non-Classical Measurement Concepts. To formulate a generalized approach to measurement uncertainty, let us consider some new trends in non-classical measurements: a) autonomous measurements; b) distributed measurements based on sensor networks; c) intelligent measurements; d) cognitive measurements; e) soft measurements.
The arrival of these novel measurement concepts is related to the formulation of sophisticated measurement–estimation tasks which are included in more general problems characterized by a high level of complexity and uncertainty. In such cases, a measurand Y is not measured directly, but is determined from n other quantities X1, X2, ..., Xn through a functional relationship Y = f(X1, X2, ..., Xn). Moreover, the estimates of the different quantities may be obtained with different levels of accuracy: some parameters may be determined with higher accuracy and the others with lower accuracy. Hence, procedures for fusing measurement results obtained in various scales are of special concern. Typical examples of measurement metasystems are various monitoring problems, for instance, the problem of monitoring artificial structures (bridges, tunnels, and so on) in the railway sector. Here monitoring based on measuring the parameters of the object and its environment (e.g. bridge deformation, wind strength, etc.) supposes diagnostics of the object's current state, prognosis of its evolution and the formation of control strategies (Figure 3).

How can a measurement be made intelligent? This question has various answers. In [20] the development of intelligent measurement information systems (MIS) supposes the creation of a friendly user interface to open the opportunity of autonomous decision-making by the MIS itself. Furthermore, in [21] intelligent measurement is related to metrological knowledge engineering and the use of applied intelligent systems (expert systems, fuzzy systems, multi-agent organizations) in metrology. A possible classification of intelligent measurements is presented in Figure 4. The concept of «Soft Computing» was proposed by L.A. Zadeh in 1994 [22]. It is actually seen both as a conceptual foundation for Computational Intelligence [23] and as a consortium of advanced information technologies for constructing new-generation hybrid intelligent systems.
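Returning to the indirect-measurement setting above, where Y = f(X1, ..., Xn) is computed from several measured inputs of different accuracy, the standard first-order (GUM-style) combination of input uncertainties can be sketched as follows. This is a generic sketch assuming uncorrelated inputs; the function, values and parameter names are illustrative:

```python
import math

def propagate_uncertainty(f, xs, us, h=1e-6):
    """First-order propagation for uncorrelated inputs:
    u_Y^2 = sum_i (df/dx_i)^2 * u_i^2, with numerical partial derivatives."""
    y = f(*xs)
    var = 0.0
    for i, u in enumerate(us):
        bumped = list(xs)
        bumped[i] += h
        dfdx = (f(*bumped) - y) / h   # forward-difference partial derivative
        var += (dfdx * u) ** 2
    return y, math.sqrt(var)

# Example: electrical power P = V * I from a voltage and a current reading,
# measured with different accuracies (0.1 V vs. 0.05 A).
power, u_power = propagate_uncertainty(lambda v, i: v * i,
                                       xs=[12.0, 2.0], us=[0.1, 0.05])
```

Here the input with the coarser relative accuracy dominates the combined uncertainty, which is why fusing results obtained in various scales, as noted above, deserves special care.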
Soft computing usually integrates fuzzy logics, neural networks and evolutionary computations; some other useful approaches and techniques, such as probabilistic computing, chaos theory and machine learning, are also seen as its intrinsic components. In 1997 A.N. Averkin and S.V. Prokopchina [24] introduced, by analogy with Soft Computing, the concept of «Soft Measurement» to deal with uncertainty, imprecision and fuzziness in measurements. Soft measurement extends probabilistic and/or fuzzy approaches to deal with the non-specificity of measured parameters. A canonical version of soft measurements is referred to as Bayesian intelligent measurements, which include probabilistic approaches, the Bayesian decision-making criterion and fuzzy variables. Here a basic concept is a fuzzy linguistic scale that extends conventional measurement scales. The main idea consists in considering a mapping between a secondary linguistic scale and a primary measurement scale, obtaining fuzzy measurements to be processed by soft computing techniques.

The term «Cognitive Measurement» was introduced by S. Prokopchina in [25]. She puts the emphasis on generating measurement information networks and transforming them into knowledge networks employing Bayesian intelligent technologies. We associate the concept of cognitive measurement with the process of hierarchical information granulation on the basis of the measurement–estimation unity principle [26]. According to this principle, we introduce a two-level measurement architecture, where the lower level is responsible for obtaining fine-grained information by using an artificial sensor system, and the higher level transforms it into coarse-grained information related to a pragmatic scale of normative linguistic values such as «norm», «nearly norm», «out of norm», «far from norm», etc. Such measurements with words may use Zadeh's Generalized Theory of Uncertainty [27], including the technique of generalized constraint propagation.
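The two-level architecture just described, with a lower level producing fine-grained readings and a higher level mapping them onto a pragmatic linguistic scale, can be sketched minimally as follows. The norm bounds, tolerance and zone labels are illustrative assumptions, and a crisp mapping is used here for simplicity (the fuzzy case is discussed later in the paper):

```python
def pragmatic_estimate(reading, norm=(19.0, 23.0), tolerance=2.0):
    """Map a fine-grained numeric reading to a coarse-grained linguistic
    granule on a pragmatic scale of normative values."""
    lo, hi = norm
    if lo <= reading <= hi:
        return "norm"
    # Distance from the nearest norm boundary decides the coarser granules.
    distance = (lo - reading) if reading < lo else (reading - hi)
    if distance <= tolerance:
        return "nearly norm"
    if distance <= 3 * tolerance:
        return "out of norm"
    return "far from norm"

labels = [pragmatic_estimate(t) for t in (21.0, 24.5, 27.0, 40.0)]
# yields "norm", "nearly norm", "out of norm", "far from norm"
```

The lower level here is just the numeric reading; everything pragmatic (what counts as a norm and how far is too far) lives at the upper, estimation level, in line with the measurement–estimation unity principle.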
In this context we share Kreinovich and Reznik's viewpoint that any measurement is in essence a granulation process (see [28]). Generally, we can speak about granular measurements with a sensor data granulation engine. It is worth noticing that conventional measurement, which assigns precise numbers to objects, has an instrument-centric nature, whereas granular measurement associated with data mining and knowledge elicitation may be referred to as anthropocentric measurement. Here the goal of measurement is not obtaining exact numerical information in itself, but solving a practical problem by formulating some recommendations in a linguistic form.

Measurement Granules, Granulation Process and Granular Pragmatics

In this paper both the measurement process under uncertainty and the interpretation of measurement results are associated with the construction of information/knowledge granules. So we should answer the following questions: 1. What is a granule and what is granular measurement? 2. Why is it suitable to use granules in measurements? 3. How does this approach differ from conventional approaches to measurement, and which new ideas and results does it contain? 4. Is it a good candidate for generalizing and unifying existing measurement practices?

The term «granule» originates from the Latin word granum, meaning grain, and denotes a small particle of the real world. This concept, proposed by L. Zadeh, expresses the levels of accuracy and uncertainty in measurement. Measurement information granules are complex dynamic entities which are formed to achieve some goal. Granulation procedures play a leading part in the design and implementation of intelligent measurement systems. In particular, by selecting different levels of granulation one can obtain different levels of knowledge. According to Zadeh, a granule is a collection of objects (values) which are drawn together by indistinguishability, similarity, proximity or functionality [29].
It is easy to clarify the term «granular» by contrast to «singular». A classical measurement admits (at least as a special case) a singleton measurand value. By contrast, contemporary measurement science (see [18, 19]) tacitly agrees that both the measured value and its uncertainty are granules. The capacity for granulation, associated with operating on measurement information at different abstraction levels, is a key property of cognitive measurements. Here the term granulation encompasses both the composition process (generation of coarse-grained information units) and the decomposition process (formation of fine-grained units).

Elements of Granular Measurements Theory. An implementation of granular measurements requires the construction of a granulation theory and the development of measurement science. Here granulation includes the construction, interpretation and representation of measurement granules, whereas the development of measurement theory is focused on non-classical measurement concepts (see above) and specifically supposes the interpretation of cognitive measurements as granular hierarchical structures. The heart of cognitive measurement is the formation of homogeneous and heterogeneous, multidimensional granules through one-to-many links between primary measurements and their linguistic interpretations (Figure 5). Below we will consider a basic granular measurement procedure as a pragmatic interpretation of earlier obtained sensor data.

Let us consider some fundamentals of granulation theory [30, 31]. These are: basic granulation principles and criteria; ways of interpreting and classifying granules; granulation methods, techniques and procedures; formal models of granules and granular structures; mappings between different granulation levels; quantitative parameters of both granules and the granulation process.

1. Measurement Granulation Principles and Criteria.
Information granulation reflects a basic systems analysis principle: the modeling language for a complex system has to correspond to the available information on this system. Following Zadeh, in granular measurement we employ the coarsest level of granulation which is consistent with the allowable level of imprecision. Here the level of measurement granulation may be given as the number of objects in a granule related to the total number of granules. Granulation criteria are directly associated with the pragmatic interpretation of granules and answer the question of why several measurement values are included in one granule. In many situations granules are induced by indiscernibility or order relations.

2. Interpretation and Classification of Measurement Granules. Measurement granules differ from one another by their nature, complexity, size, level, etc. Here crisp and fuzzy, simple and compound, homogeneous and heterogeneous granules may be specified. Classical granular notions in the measurement/estimation context refer to the probability distribution, the probability density distribution and the confidence interval. Fuzzy measurement granules [32] encompass fuzzy intervals and fuzzy numbers, possibility measures and distributions, fuzzy mappings and linguistic hedges, ordinary and fuzzy cuts, etc.

3. Granulation Approaches, Methods and Procedures. The construction of measurement granules may be considered as either a top-down or a bottom-up process. In the first case a granule may be seen as a subset of a universal set; its coverings and partitions are a natural way of granulation. In the second case we start by grouping singleton values to form a granule; then small granules may be merged to generate a bigger granule. For instance, a primary granule is constructed as the neighborhood of a point and then a pretopology of the neighborhood system is taken. Granulation methods face the problem of how to include measurement values into the same granule.
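The bottom-up process just described, grouping singleton values and merging nearby ones, can be sketched with a minimal proximity-based granulator (the function name and the merge rule, a fixed gap threshold eps, are illustrative choices, not a method prescribed by the paper):

```python
def granulate(values, eps):
    """Bottom-up granulation: sorted singleton values are merged into one
    granule while consecutive readings differ by at most eps (a proximity
    criterion, one of Zadeh's grouping relations)."""
    granules = []
    for v in sorted(values):
        if granules and v - granules[-1][-1] <= eps:
            granules[-1].append(v)     # merge into the current granule
        else:
            granules.append([v])       # start a new granule
    return granules

data = [1.0, 1.2, 1.1, 5.0, 5.3, 9.9]
granulate(data, eps=0.5)   # -> [[1.0, 1.1, 1.2], [5.0, 5.3], [9.9]]
```

Varying eps changes the granulation level: a large eps yields one coarse granule, a tiny eps degenerates to singletons, matching the principle of choosing the coarsest granulation consistent with the allowable imprecision.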
It is necessary to build a variety of granulation algorithms to produce granules based on some granulation criterion.

4. Formal Models of Granules. Basic mathematical models of granules are subsets, multisets, intervals, partitions, clusters, neighborhoods, rough sets, fuzzy sets, and so on. Below we will consider logico-algebraic granulation models. Here logical granulation models are founded on multivalued, fuzzy and paraconsistent logics. A forerunner of these logics was N.A. Vasiliev, who proposed a two-level logical structure (empirical logic and metalogic) and introduced multidimensional (imaginary) logics [33] (three-dimensional logical structures are illustrated by Vasiliev's triangle depicted in Figure 6). By analogy, we divide our two-level measurement structure into an empirical measurement level and a metameasurement (pragmatic estimation) level based on logico-linguistic estimates of the obtained quantitative values.

Cognitive Sensors: Granular Pragmatics in Measurements. We develop a pragmatic approach to cognitive measurements based on Peirce's semiotics. Unlike logical semantics, granular logical pragmatics puts the emphasis on the practical (situational) interpretation of acquired measurement data in terms of granules and granular structures. By measurement pragmatics we mean the understanding, interpretation and use of measurement results for solving real problems of diagnostics, control, monitoring, etc. In other words, we associate with any measurement result m from the set of measurement results M an estimate v ∈ V interpreted as a truth value or, more generally, a family of estimates. Let us consider a sensor able to measure some parameter of an inspected object or process and interpret the obtained information. Such a sensor equipped with a three-valued «traffic lights» pragmatics will be referred to as Vasiliev's sensor.
The data obtained from this sensor are granulated according to three values: T, «measured truth» («norm»: sensor data are located in the «green zone»); F, «measured falsity» («out of norm»: sensor data are located in the «red zone»); B, «measured ambiguity» («boundary situation»: sensor data are located in the «yellow zone»). In addition, we may take into consideration the case of a sensor's exhausted resources (it can occur in wireless sensor networks): N, «total uncertainty» (the sensor is sleeping). This four-valued measurement pragmatics is illustrated in Figures 7a and 7b by Hasse diagrams for Belnap's logical lattice L4 and Scott's approximation (in our context, information) lattice A4. A cognitive sensor provided with four-valued pragmatics is called here Belnap's sensor. The logical matrix for Belnap's sensor may be written in the form LM_V4 = ⟨{T, B, N, F}, {¬, ∧, ∨, →}, {F}⟩, (1) where T, B, N, F are pragmatic truth, pragmatic ambiguity, total uncertainty and pragmatic falsity, respectively; ¬, ∧, ∨, → are the main logical operations over truth values (negation, conjunction, disjunction, implication); and F is the anti-designated value.

The construction of granular logical pragmatics supposes using subsets as truth values. It corresponds to the Dunn–Belnap strategy of constructing logical semantics, where both gaps and gluts are possible [34, 35]. Here the term «glut» stands for the granular truth value «both true and false» (it means a refusal of the singular truth values of classical logic), and the gap is viewed as none (neither true nor false); it extends the bivalence principle of classical logic. Let V be a set of logical truth values. Following Dunn, we shall take as truth values not only elements v ∈ V, but also any subsets of V, including the empty set ∅. In other words, a direct transition from the set V to its power set 2^V is performed. In turn, L. Zadeh proposed a further extension of this approach by introducing fuzzy truth values v_f ∈ [0,1]^V and linguistic truth labels.
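The four-valued pragmatics of matrix (1) can be prototyped with the standard pair encoding of Belnap's values as (evidence-for, evidence-against); the encoding is an implementation convention, not notation from the paper:

```python
# Belnap's four values as (evidence-for, evidence-against) pairs:
# T = (1, 0), F = (0, 1), B = (1, 1) "both", N = (0, 0) "neither".
T, F, B, N = (1, 0), (0, 1), (1, 1), (0, 0)

def neg(a):
    """Negation swaps evidence for and against (fixes B and N)."""
    return (a[1], a[0])

def conj(a, b):
    """Conjunction: meet in the truth order of lattice L4."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def disj(a, b):
    """Disjunction: join in the truth order of lattice L4."""
    return (max(a[0], b[0]), min(a[1], b[1]))

conj(T, B)   # a "norm" reading conjoined with a boundary one gives B
disj(N, F)   # a sleeping sensor joined with an out-of-norm one gives N
```

Under this encoding the Hasse diagram of L4 falls out automatically: F is the bottom of the truth order, T the top, with B and N as incomparable middle elements.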
Another way of logical granulation, related to Vasiliev's and Dunn's ideas, consists in constructing vector pragmatics: for instance, for any measurement result m from M we have an estimate v: M → [0,1]², v(m) = (T(m), F(m)) (pragmatic truth and pragmatic falsity, i.e. «norm» and «out of norm», are considered here independently) or, more generally, v: M → [0,1]³, v(m) = (T(m), F(m), B(m)). (2) In the latter case, each measurement is characterized by a degree of pragmatic truth, a degree of pragmatic falsity and a degree of pragmatic ambiguity (or paraconsistency). It is worth stressing that paraconsistent and vector pragmatics are of special interest for granular measurements.

Granular Interpretations of Multi-Sensor Data Fusion. To generate logics of sensor networks we will use product logics constructed as product lattices and multilattices to form granules from sensor data. The logic of a network of Belnap's sensors is based on the expression 4^n, where n is an integer, n > 1. A simple combination of two Belnap's sensors gives 4² = 16 pragmatic values, a three-sensor network 4³ = 64, etc. Let us take the problem of interpreting the data of two coupled Belnap's sensors by using bilattices [36, 37]. Informally, a bilattice may be seen as a product lattice with two different ordering relations, where a link between these orders is given by a compound heterogeneous negation operation (see [38]) that extends Belnap's negation. A bilattice may be given by a quadruple BL = ⟨U, ≤v, ≤i, ¬G⟩, (3) where U = X × X is a product set; ≤v, ≤i are two order relations, for instance, the truth order and the information order; ¬G is Ginsberg's negation (it may be interpreted as a semi-negation, semi-affirmation operation [39]).
It is obvious that the bilattice (3) can be viewed as an algebra with two different meet and join operations BL = ⟨X × X, ∧, ∨, ⊗, ⊕, ¬⟩, (3*) where 1) ⟨X, ∧, ∨⟩ and ⟨X, ⊗, ⊕⟩ are complete lattices; 2) ¬ is a mapping ¬: X × X → X × X such that: (a) ¬² = 1; (b) ¬ is a lattice homomorphism from ⟨X, ∧, ∨⟩ to ⟨X, ∨, ∧⟩ and a lattice homomorphism of ⟨X, ⊗, ⊕⟩ to itself. Finally, a logical bilattice [40] is a pair LBL = ⟨BL, BF⟩, where BL stands for a bilattice and BF is a prime bifilter on BL. Now let us interpret the basic granular pragmatic truth values for two sensors in bilattices:
· T1T2 – «measured concerted truth» (the data issued by both sensors take norm values);
· F1F2 – «measured concerted falsehood» (both sensors' data are out of norm, which witnesses a failure state);
· T1B2~B1T2 – «measured partial contradiction as a first-order fault» (the first sensor shows a norm value and the second sensor indicates a partial fault, and vice versa);
· T1N2~N1T2 – «measured partial truth with uncertainty» (the first sensor indicates a norm value and the second sensor sleeps, and vice versa);
· T1F2~F1T2 – «measured full contradiction» (the first sensor indicates a norm value and the second sensor informs about a failure state, and vice versa);
· B1B2 – «measured concerted ambiguity» (the data from both sensors inform about a first-order fault);
· N1N2 – «total uncertainty» (the resources of both sensors are exhausted or both sensors sleep);
· F1B2~B1F2 – «measured partial contradiction as a second-order fault» (the first sensor indicates an out-of-norm value and the second sensor indicates a fault, and vice versa);
· F1N2~N1F2 – «measured partial falsehood with uncertainty» (the first sensor indicates a failure value and the second sensor sleeps, and vice versa).
All finite bilattices can be easily visualized by using double Hasse diagrams (Figures 8 and 9). The appropriate bilattice to illustrate the communication between two Belnap's sensors is shown in Figure 10.
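The dyad pragmatics enumerated above can be generated mechanically as the product of two copies of Belnap's value set, with the forbidden mixed pairs removed (a minimal sketch; the string labels are illustrative):

```python
from itertools import product

VALUES = ["T", "B", "N", "F"]   # Belnap's four single-sensor values

# The product of two Belnap lattices gives 4**2 = 16 raw pairs; excluding
# the forbidden mixed pairs (B, N) and (N, B) leaves 14 admissible values.
dyads = [(a, b) for a, b in product(VALUES, VALUES)
         if {a, b} != {"B", "N"}]

len(dyads)   # 14 admissible pragmatic values for a two-sensor dyad
```

The same construction scales to n sensors via product(VALUES, repeat=n), giving the 4^n raw combinations mentioned in the text before any forbidden combinations are pruned.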
In Figure 10 we see, for instance, that T1T2 >v T1B2 >v T1F2 >v B1F2 >v F1F2. In Belnap's logic the values B and N are considered independently; therefore, for its extension the values B1N2 and N1B2 are forbidden. So |V| = 14, i.e. we obtain a 14-valued pragmatics of the communication of two Belnap's sensors (Figure 10). A corresponding map of faults is given in Figure 11.

A Few Words about Fuzzy Vasiliev's and Belnap's Sensors. In many real-life problems the boundaries between the green, yellow and red zones are dynamic and vague. Also, in mining sensor networks with data issued from heterogeneous sensors we may use such characteristics of the overall monitoring situation as «most of the sensors' data are located in the green zone», «a considerable part of the sensors indicate ambiguous data» or «a few sensors are sleeping». This requires the transition from crisp to shadowed [41] and fuzzy Vasiliev's and Belnap's sensors equipped with appropriate pragmatics. In the case of Belnap's sensors such a pragmatics may be given by a quadruple of fuzzy truth values defined on the unit interval: Vf(m) = {Tf(m), Bf(m), Nf(m), Ff(m)}, (4) where m is a measurement result and Tf, Bf, Nf, Ff are fuzzy Dunn–Belnap truth values, Tf, Bf, Nf, Ff ∈ [0,1]. The paraconsistent semantics of fuzzy Belnap's logic was already studied in [42].

Our further research will be focused on combining granular measurements with data mining and knowledge discovery approaches. Some sensor data fusion techniques, including granular logics based on trilattices [43] and quadrolattices, will be synthesized. In particular, the absence of strict boundaries between the «traffic lights» pragmatics values and the wide use of linguistic hedges in specifying the overall diagnostics/control/monitoring situation show the need for fuzzy measurement information granulation.

References

2. Finn V.K. Iskusstvennyy intellekt: Metodologiya, primeneniya, filosofiya [Artificial Intelligence: Methodology, Application, Philosophy]. Moscow, Editorial URSS Publ., 2011 (in Russ.). 3.
Jamshidi M. Robotics, Automation, Control, and Manufacturing: Trends, Principles, and Applications. Proc. of the 5th Biannual World Automation Congress (WAC2002, Orlando, Florida, USA, June 9–13, 2002). Albuquerque, NM, TSI Press, 2002. 4. Nakashima H., Aghajan H., Augusto J.C. Handbook of Ambient Intelligence and Smart Environments. NY, Springer Verlag Publ., 2010. 5. Chong N.Y., Mastrogiovanni F. Handbook of Research on Ambient Intelligence and Smart Environments: Trends and Perspectives. IGI Global, NY, 2011. 6. Tarassov V.B. From Hybrid Systems to Ambient Intelligence. Proc. of the 1st Int. Symp. on Hybrid and Synergetic Intelligent Systems: Theory and Practice (HySIS'2012). Kaliningrad, Kant Baltic Fed. Univ. Publ., 2012, Part 1, pp. 42–54 (in Russ.). 7. Ackoff R. From Data to Wisdom. Journ. of Applied Systems Analysis, 1989, vol. 16, pp. 3–9. 8. Zeleny M. Human Systems Management: Integrating Knowledge, Management and Systems. World Scientific, 2005, pp. 15–16. 9. Pospelov D.A. Intelligent Interfaces for New Generation Computers. Computer Science. Moscow, Radio i Svyaz Publ., 1989, iss. 3, pp. 4–20 (in Russ.). 10. Zadeh L.A. From Computing with Numbers to Computing with Words – From Manipulation of Measurements to Manipulation of Perceptions. Computing with Words. NY, Wiley and Sons Publ., 2001, pp. 35–68. 11. Pfanzagl J. Theory of Measurement. Wuerzburg, Physica-Verlag, 1968. 12. Krantz D.R., Luce R.D., Suppes P., Tversky A. Foundations of Measurement. NY, Academic Press, 1971, vol. 1; 1990, vol. 2 and 3. 13. Potter R. The Art of Measurement. Englewood Cliffs, NJ, Prentice Hall, 2000. 14. Reznik L., Kreinovich V. Soft Computing in Measurement and Information Acquisition. NY, Springer-Verlag, 2003. 15. Zadeh L.A. The Concept of Linguistic Variable and its Application to Approximate Reasoning. Part 1 and 2. Information Sciences. 1975, vol. 8, pp. 199–249, 301–357. 16. Tarassov V.B. Development of Synergistic Approach in Psychology and Artificial Intelligence: New Horizons of Mathematical Psychology. Matematicheskaya psikhologiya: shkola V.Yu. Krylova [Mathematical Psychology: Krylov's School]. Moscow, Institute of Psychology of RAS Publ., 2010, pp. 117–156 (in Russ.). 17. Pospelov D.A. The Grey and/or the Black-and-White. Applied Ergonomics. Special Issue on Reflexive Processes. Moscow, AEA Publ., 1994, pp. 25–29. 18. Guide to the Expression of Uncertainty in Measurement (GUM). International Organization for Standardization, Geneva, 1995. 20. Finkelstein L. Intelligent and Knowledge-Based Instrumentation. An Examination of Basic Concepts. Measurement. 1994, vol. 14, pp. 23–30. 21. Reznik L.P., Dabke K.P. Measurement Models: Application of Intelligent Methods. Measurement. 2004, vol. 35, pp. 47–58. 22. Zadeh L.A. Fuzzy Logic, Neural Network and Soft Computing. Communications of the ACM. 1994, vol. 37, no. 3, pp. 77–84. 23. Pedrycz W. Computational Intelligence: an Introduction. Boca Raton, CRC Press, 1997. 24. Averkin A.N., Prokopchina S.V. Soft Computing and Measurements. Intellektualnye sistemy [Intelligent Systems]. Moscow, MSU Publ., 1997, vol. 2, no. 1–4, pp. 93–114 (in Russ.). 25. Prokopchina S.V. Cognitive Measurements on the Basis of Bayesian Intelligent Technologies. Proc. of the 13th Int. Conf. on Soft Computing and Measurements (SCM'2010, St. Petersburg, June 23–25, 2010). St. Petersburg, LETI Publ., 2010, pp. 28–34 (in Russ.). 26. Tarassov V.B. On Granular Measurement Structures in Ambient Intelligence and Smart Environments. Measurement Information and Control Systems. 2013, vol. 2 (in Russ.). 27. Zadeh L.A. Toward a Generalized Theory of Uncertainty (GTU): an Outline. Information Sciences – Informatics and Computer Science. 2005, vol. 172, no. 1–2, pp. 1–40. 28. Reznik L. Measurement Theory and Uncertainty in Measurements: Application of Interval Analysis and Fuzzy Sets Methods. Handbook of Granular Computing. Chichester, UK, John Wiley and Sons, 2008, pp. 517–532. 29. Zadeh L.A. Toward a Theory of Fuzzy Information Granulation and its Centrality in Human Reasoning and Fuzzy Logic. Fuzzy Sets and Systems, 1997, vol. 90, pp. 111–127. 30. Bargiela A., Pedrycz W. Granular Computing: an Introduction. Dordrecht, Kluwer Academic Publ., 2003. 31. Tarassov V.B. Information Granulation by Cognitive Agents and Non-Standard Fuzzy Sets. Proc. of the 6th Int. Conf. on Soft Computing, Computing with Words and Perceptions in System Analysis, Decision and Control (ICSCCW'2011, Antalya, Turkey, September 12, 2011). Kaufering, bQuadrat Verlag Publ., 2011, pp. 59–74. 32. Mauris G., Lasserre V., Foulloy L. Fuzzy Modeling of Measurement Data Acquired from Physical Sensors. IEEE Transactions on Instrumentation and Measurement. 2000, vol. 49, no. 6, pp. 1201–1205. 33. Vasiliev N.A. Voobrazhaemaya logika [Imaginary Logic]. Moscow, Nauka Publ., 1989. 34. Dunn J.M. Intuitive Semantics for First-Degree Entailment and «Coupled Trees». Philosophical Studies. 1976, vol. 29, pp. 149–168. 35. Belnap N. A Useful Four-Valued Logic. Modern Uses of Multiple-Valued Logic. Dordrecht, 1977, pp. 8–37. 36. Ginsberg M. Multi-Valued Logics: a Uniform Approach to Reasoning in AI. Computer Intelligence. 1988, vol. 4, pp. 256–316. 37. Fitting M. Bilattices and the Theory of Truth. Journ. of Philosophical Logic. 1989, vol. 19, pp. 225–256. 38. Tarassov V.B. Selecting Negations in Soft Computing: a Lingua-Logical Approach. Proc. of the 4th Int. Conf. on Soft Computing, Computing with Words and Perceptions in System Analysis, Decision and Control (ICSCCW'2007, Antalya, Turkey, August 27–28, 2007). Kaufering, bQuadrat Verlag Publ., 2007, pp. 110–124. 39. Tarassov V.B. Lattice Products, Bilattices and Some Extensions of Negations, Triangular Norms and Triangular Conorms. Proc. of the Int. Conf. on Fuzzy Sets and Soft Computing in Economics and Finance (FSSCEF'2004, Saint Petersburg, June 17–20, 2004). Mexico, IPM, 2004, vol.
1, pp. 272–282. 40. Arieli O., Avron A. Reasoning with Logical Bilattices. Journ. of Logic, Language and Information. 1996, vol. 5, no. 1, pp. 25–63. 41. Pedrycz W. Shadowed Sets: Representing and Processing Fuzzy Sets. IEEE Transactions on Systems, Man and Cybernetics, B. 1998, vol. 28, pp. 103–109. 42. Turunen E., Ozturk M., Tsoukias A. Paraconsistent Semantics for Pavelka Style Fuzzy Sentential Logic. Fuzzy Sets and Systems. 2010, vol. 161, no. 14, pp. 1926–1940. 43. Shramko Y., Dunn J. M., Takenaka T. The Trilattice of Constructive Truth Values. Journ. of Logic and Computation. 2001, vol. 11, pp. 761–788. 
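The 14-valued dyadic pragmatics described in the text follows from a simple counting argument: the product of two Dunn–Belnap value sets {T, B, N, F} yields 16 pairs, from which the two forbidden combinations B1N2 and N1B2 are removed. A minimal Python sketch of this enumeration (the names BELNAP, FORBIDDEN and dyadic_pragmatics are this sketch's own, not the authors'):

```python
from itertools import product

# Dunn-Belnap truth values: True, Both (overdetermined),
# None (underdetermined), False.
BELNAP = ("T", "B", "N", "F")

# B and N are treated as independent kinds of information, so one
# sensor reporting B while the other reports N is excluded from the
# dyadic pragmatics.
FORBIDDEN = {("B", "N"), ("N", "B")}

def dyadic_pragmatics():
    """All admissible value pairs for two communicating Belnap sensors."""
    return [(v1, v2) for v1, v2 in product(BELNAP, BELNAP)
            if (v1, v2) not in FORBIDDEN]

values = dyadic_pragmatics()
print(len(values))  # 4*4 = 16 pairs, minus 2 forbidden, gives 14
```

The same enumeration extends directly to larger sensor groups by taking longer products and filtering any pair of coordinates that hits a forbidden combination.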
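Formula (4) can likewise be illustrated in code. The sketch below maps a scalar measurement m to a fuzzy Dunn–Belnap quadruple (Tf, Bf, Nf, Ff) using triangular memberships over a «green» and a «red» zone; the zone boundaries, the function names, and the particular aggregation chosen for Bf (conflicting evidence) and Nf (missing evidence) are assumptions of this sketch, not constructions from the paper:

```python
def triangular(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_belnap_quadruple(m, ok_zone=(0.0, 25.0, 50.0),
                           alarm_zone=(40.0, 75.0, 100.0)):
    """Map a measurement m to fuzzy Dunn-Belnap values per formula (4).

    Assumed interpretation: Tf is the degree of evidence for 'normal',
    Ff the degree of evidence for 'alarm', Bf the degree of conflicting
    evidence (both zones fire), Nf the degree of missing evidence.
    """
    t = triangular(m, *ok_zone)     # evidence for the green zone
    f = triangular(m, *alarm_zone)  # evidence for the red zone
    b = min(t, f)                   # overdetermined: both kinds of evidence
    n = 1.0 - max(t, f)             # underdetermined: lack of evidence
    return {"T": t, "B": b, "N": n, "F": f}
```

A measurement in the overlap of the two zones (e.g. m = 45 with the defaults above) yields a nonzero Bf, which is exactly the «ambiguous data» situation the text describes; all four values stay in the unit interval as formula (4) requires.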
Permanent link: http://swsys.ru/index.php?page=article&id=3661&lang=en 