Cyber Security - Policy-Based Information Management Systems (IMS) Solutions.



Several publications that chronicle the history and development of the policy-based management (PBM) paradigm can be found in the existing literature. 

The early publications devoted to this paradigm emphasized the importance of security concerns. 

In this sense, the first policies in charge of determining the rules by which access control systems were governed were security policies. 

Access control methods determine whether access to a resource should be permitted or denied based on the security rules defined by the system administrator. 

Security policies may be assigned different access security levels, each with its own set of criteria for determining what should and should not be permitted. 

The access control matrix was created to safeguard different objects on shared computers. 

Objects shared across domains, such as files, memory, and terminals, were secured by various access privileges. 

Each matrix element contained a set of access attributes that defined a domain's access rights to a particular object. 

Read, write, owner, call, and control are all examples of attributes. 

Access Control Lists (ACLs) were later provided as an alternative to the access control matrix, presenting the same information in column format (one list per object). 
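
As a minimal illustration of these two representations (the domains, objects, and attributes here are hypothetical, not taken from the original papers), the matrix can be held as a mapping from (domain, object) pairs to attribute sets, and an ACL is then simply one column of that matrix:

```python
# Toy access control matrix: rows are domains (subjects), columns are objects.
ATTRIBUTES = {"read", "write", "owner", "call", "control"}

matrix = {
    ("alice", "file1"): {"read", "write", "owner"},
    ("bob", "file1"): {"read"},
    ("bob", "terminal1"): {"read", "write"},
}

def allowed(domain, obj, attribute):
    """Check one matrix cell for a specific access attribute."""
    return attribute in matrix.get((domain, obj), set())

def acl(obj):
    """ACL view: the matrix column for one object (which domain may do what)."""
    return {dom: attrs for (dom, o), attrs in matrix.items() if o == obj}

print(allowed("bob", "file1", "write"))  # False
print(acl("file1"))  # {'alice': {'read', 'write', 'owner'}, 'bob': {'read'}}
```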

Confidentiality and integrity are two crucial components of the security problem that IMSs take into account. 

The first formal model to prevent unauthorized disclosure of information was the confidentiality policy model. 

This model was designed to formalize the U.S. Department of Defense's security policy. 

It was based on the state-machine concept, in which a specified set of access control rules was organized into states connected by transition functions. 

The security levels ranged from the most sensitive (Top Secret) to the least sensitive (Public). 

Users could read information only if their security level was equal to or higher than the information's classification, and write rules prevented information from flowing down to lower levels. 
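
A minimal sketch of the level-comparison checks implied by this kind of confidentiality model might look as follows (the level names and ordering are illustrative):

```python
# Clearance/classification levels ordered from least to most sensitive.
LEVELS = {"Public": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level, object_level):
    # Simple security property: read only at or below your own level (no read up).
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property: write only at or above your own level (no write down).
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_read("Secret", "Public") and not can_read("Public", "Secret")
assert can_write("Public", "Secret") and not can_write("Secret", "Public")
```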

Finally, despite the fact that role-based access control (RBAC) is not directly related to policy definition, it has been acknowledged as a security paradigm for defining and enforcing organizational access control rules. 

RBAC is a security method in which permissions are assigned to roles and users are assigned to those roles, which simplifies authorization management. 
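
The separation RBAC introduces can be sketched in a few lines (the users, roles, and permissions below are hypothetical):

```python
# Permissions attach to roles; users attach to roles; authorization is
# resolved through the roles rather than per user.
role_permissions = {
    "auditor": {("logs", "read")},
    "operator": {("logs", "read"), ("config", "write")},
}
user_roles = {"alice": {"operator"}, "bob": {"auditor"}}

def is_authorized(user, resource, action):
    return any((resource, action) in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(is_authorized("bob", "config", "write"))    # False
print(is_authorized("alice", "config", "write"))  # True
```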

Information integrity, on the other hand, is the assurance that data has not been tampered with or corrupted by malevolent users or system failures. 

Integrity policies, in this sense, describe how the validity of the system's information should be maintained. 

The first integrity policy model outlined the factors that must be taken into account when designing secure computer systems. 

Another important feature of the security area is privacy. 

Privacy gives individuals and organizations the ability to control the conditions under which their personal information is collected and used. 

Many works based on privacy rules and aimed at safeguarding sensitive information have been developed in recent years. 

Business practice and legal issues drive privacy policies, which are long-term pledges given by an organization to its end users. 

In this regard, a system was presented that makes privacy policy authoring, implementation, and compliance monitoring easier. 

A framework was presented that could describe and enforce privacy standards by providing a formal definition of purpose and offering a modal-logic language for explicitly expressing purpose limitations. 

Enterprise Privacy Practices (E-PP) is a platform that specifies technologies for privacy-enabled customer data management and exchange. 

This approach distinguishes between an enterprise-specific deployment policy and a privacy policy that spans the whole data life cycle. 

Regarding the effectiveness of privacy policies, studies have tested whether apps with privacy policies are more likely to protect personal information than apps without them, and empirical work has examined online privacy policies and the tools available to users with privacy concerns. 

The link between privacy policies and user responses has also been researched in the e-commerce arena, revealing that perceived privacy risk mediates the effect of a privacy policy's contents on user behavior. 

Various security theories and techniques that have been used to ensure computer security in recent years have also been investigated. 

The authors of this research also review a number of security principles and the models used to guarantee that these security standards are followed. 

Various higher-level and more complex models are built on the foundation of basic security models, such as access control models and protection rings. 

An authentication and confidentiality technique based on homomorphic encryption and targeted at Mobile Cloud Computing (MCC) has also been described. 

This work also includes a recovery method for mobile users to gain secure access to remote multi-cloud servers. 

The authors have supplied a prototype of the suggested framework to show its robustness, efficiency, and security. 

Privacy and security are also important considerations in the Internet of Things (IoT) paradigm. 

The security problems of IoT and MCC technologies have been studied and assessed in this regard. 

According to the authors of this paper, cloud computing technology enhances the IoT's functionality. 

In recent years, the networking paradigm has attracted interest from both corporate and academic research communities as one of many current research areas where policies can be applied to simplify management tasks. 

Several proposals aimed at policy-based network management (PBNM) with various goals can be found in the literature. 

One methodology for refining rules in PBM systems was presented for network security and privacy. 

The refinement process followed a set of phases to turn high-level goals into low-level policies. 

Another proposal put forward a policy-based access control system that prevented harmful interference caused by faulty equipment or malicious users. 

A set of policy-based components was created for this solution, which was combined with the algorithms used by software-defined radios to identify interference produced by malfunctioning devices. 

The suggested policy-based components ensured that a radio did not deviate from policy standards. 

This research also provided secure policy administration and distribution mechanisms to prevent malicious users from adding new policies or changing existing ones. 

FRESCO was an OpenFlow security application development framework that made it easier to create detection and mitigation modules quickly and modularly. 

The compatibility of the Destination Addressing Control System (DACS) scheme for the cloud environment with virtualization technology was investigated in terms of network communications privacy. 

Specifically, they advocated integrating DACS into PBNM in order to control network resources through policies. 

Ethane was another system that used the Flow-based Security Language (FSL) to enable network administrators to specify access control rules. 

With the introduction of SDN in recent years, network management has become more dynamic. 

OpenSec was, in this sense, an OpenFlow-based framework that enabled network administrators to develop and enforce security rules. 

These policies established which security services must be used and the security levels that govern how OpenSec responds to hostile traffic. 

The efforts of industry and academia in recent years have also been concentrated on energy-efficient network techniques. 

In this regard, the current literature includes a thorough survey and comparison of a number of energy-efficient network approaches. 

One published solution quantified the power consumption of mobile communication systems. 

It was discovered that enhancing the energy efficiency of base stations (BSs) with low traffic loads has a large potential for lowering energy usage. 

Another idea was presented to improve the efficiency of power amplifiers for wireless BSs. 

The variation of traffic patterns over time was also analyzed in order to determine whether BSs should sleep or not. 

Another option was a control system that allowed small cells to turn off all components when they were not serving active connections. 

A transfer actor-critic algorithm (TACT) was devised to speed up the decision process of turning BSs on or off by using historical data from neighboring regions. 

The use of software routers to emulate network equipment capabilities was also suggested, along with the advantages of such an emulation environment. 

Finally, another approach aimed at improving energy efficiency was proposed. 

It was specifically characterized as a policy-based framework for resource management in fog computing. 

This solution took the fog computing idea a step further by allowing for safe cooperation and interoperability across various user-requested services. 

Several scenarios were presented to demonstrate the importance of policy management as a basic security management module in a fog environment. 

Another major subject in networks is Quality of Service (QoS), for which numerous policy-based solutions have been developed. 

Procera was an event-driven network control framework that managed and configured the network state using high-level rules. 

This technique allowed the controller to manage the network state using dynamic policies that were converted into a set of forwarding rules. 
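
As a rough, hypothetical illustration of this general idea (not Procera's actual language or API), a small compiler can turn high-level QoS policies into the kind of low-level forwarding rules a controller might install:

```python
# Illustrative policy and rule formats only; real controllers use richer schemas.
policies = [
    {"app": "voip", "match": {"ip_proto": "udp", "dst_port": 5060}, "queue": "priority"},
    {"app": "bulk", "match": {"ip_proto": "tcp", "dst_port": 80}, "queue": "best_effort"},
]

def compile_policies(policies):
    """Translate each high-level policy into a forwarding-rule dictionary."""
    rules = []
    # Earlier policies receive higher priority so they are matched first.
    for prio, pol in enumerate(reversed(policies)):
        rules.append({
            "priority": prio,
            "match": pol["match"],
            "actions": [f"set_queue:{pol['queue']}", "output:normal"],
        })
    return rules

for rule in compile_policies(policies):
    print(rule)
```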

A novel objective function for the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL), based on fuzzy logic, was proposed to address the limitations of the established objective functions, allowing the optimal paths to the destination to be selected. 

A new transfer learning approach was also suggested for spectrum management in cognitive radio networks to enhance QoS. 

According to this approach, transfer learning offers much greater QoS and throughput than distributed reinforcement learning. 

When several policies coexist, conflict resolution is a critical part of operating a policy-based system. 

The issue of conflict resolution has been addressed when implementing policies to offer QoS. 

PolicyCop, a QoS policy management framework built on SDN and targeting OpenFlow, followed this approach. 

PolicyCop enabled the definition of QoS service level agreements (SLAs) and policy enforcement control. 

This approach also kept track of the network's state and used rules to change the network's settings.



~ Jai Krishna Ponnappan



References & Further Reading:



1. OSI. Information Processing Systems-Open Systems Interconnection-Systems Management Overview. ISO 10040, 1991.

2. Jefatura del Estado. Ley Orgánica de Protección de Datos de Carácter Personal. www.boe.es/boe/dias/1999/12/14/pdfs/A43088-43099.pdf.

3. S. D. Warren, and L. D. Brandeis. The right to privacy. Harvard Law Review, 4(5): 193–220, 1890.

4. A. Westerinen, J. Schnizlein, J. Strassner, M. Scherling, B. Quinn, S. Herzog, A. Huynh, M. Carlson, J. Perry, and S. Waldbusser. Terminology for Policy-Based Management. IETF Request for Comments 3198, November 2001.

5. B. Moore. Policy Core Information Model (PCIM) Extensions. IETF Request for Comments 3460, January 2003.

6. S. Godik, and T. Moses. OASIS EXtensible Access Control Markup Language (XACML). OASIS Committee Specification, 2002.

7. A. Dardenne, A. Van Lamsweerde and S. Fickas. Goal-directed requirements acquisition. Science of Computer Programming, 20(1–2): 3–50, 1993.

8. F. L. Gandon, and N. M. Sadeh. Semantic web technologies to reconcile privacy and context awareness. Web Semantics: Science, Services and Agents on the World Wide Web, 1(3): 241–260, April 2004.

9. I. Horrocks. Ontologies and the semantic web. Communications ACM, 51(12): 58–67, December 2008.

10. R. Boutaba and I. Aib. Policy-based management: A historical perspective. Journal of Network and Systems Management, 15(4): 447–480, 2007.

11. P. A. Carter. Policy-Based Management, In Pro SQL Server Administration, pages 859–886. Apress, Berkeley, CA, 2015.

12. D. Florencio, and C. Herley. Where do security policies come from? In Proceedings of the 6th Symposium on Usable Privacy and Security, pages 10:1–10:14, 2010.

13. K. Yang, and X. Jia. DAC-MACS: Effective data access control for multi-authority Cloud storage systems, IEEE Transactions on Information Forensics and Security, 8(11): 1790–1801, 2014.

14. B. W. Lampson. Dynamic protection structures. In Proceedings of the Fall Joint Computer Conference, pages 27–38, 1969.

15. B. W. Lampson. Protection. ACM SIGOPS Operating Systems Review, 8(1): 18–24, January 1974.

16. D. E. Bell and L. J. LaPadula. Secure Computer Systems: Mathematical Foundations. Technical report, DTIC Document, 1973.

17. D. F. Ferraiolo, and D. R. Kuhn. Role-based access controls. In Proceedings of the 15th NIST-NCSC National Computer Security Conference, pages 554–563, 1992.

18. V. P. Astakhov. Surface integrity: Definition and importance in functional performance, In Surface Integrity in Machining, pages 1–35. Springer, London, 2010.

19. K. J. Biba. Integrity Considerations for Secure Computer Systems. Technical report, DTIC Document, 1977.

20. M. J. Culnan, and P. K. Armstrong. Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science, 10(1): 104–115, 1999.

21. A. I. Antón, E. Bertino, N. Li, and T. Yu. A roadmap for comprehensive online privacy policy management. Communications ACM, 50(7): 109–116, July 2007.

22. J. Karat, C. M. Karat, C. Brodie, and J. Feng. Privacy in information technology: Designing to enable privacy policy management in organizations. International Journal of Human Computer Studies, 63(1–2): 153–174, 2005.

23. M. Jafari, R. Safavi-Naini, P. W. L. Fong, and K. Barker. A framework for expressing and enforcing purpose-based privacy policies. ACM Transaction Information Systesms Security, 17(1): 3:1–3:31, August 2014.

24. G. Karjoth, M. Schunter, and M. Waidner. Platform for enterprise privacy practices: Privacy-enabled management of customer data, In Proceedings of the International Workshop on Privacy Enhancing Technologies, pages 69–84, 2003.

25. S. R. Blenner, M. Kollmer, A. J. Rouse, N. Daneshvar, C. Williams, and L. B. Andrews. Privacy policies of android diabetes apps and sharing of health information. JAMA, 315(10): 1051–1052, 2016.

26. R. Ramanath, F. Liu, N. Sadeh, and N. A. Smith. Unsupervised alignment of privacy policies using hidden Markov models. In Proceedings of the Annual Meeting of the Association of Computational Linguistics, pages 605–610, June 2014.

27. J. Gerlach, T. Widjaja, and P. Buxmann. Handle with care: How online social network providers’ privacy policies impact users’ information sharing behavior. The Journal of Strategic Information Systems, 24(1): 33–43, 2015.

28. O. Badve, B. B. Gupta, and S. Gupta. Reviewing the Security Features in Contemporary Security Policies and Models for Multiple Platforms. In Handbook of Research on Modern Cryptographic Solutions for Computer and Cyber Security, pages 479–504. IGI Global, Hershey, PA, 2016.

29. K. Zkik, G. Orhanou, and S. El Hajji. Secure mobile multi cloud architecture for authentication and data storage. International Journal of Cloud Applications and Computing 7(2): 62–76, 2017.

30. C. Stergiou, K. E. Psannis, B. Kim, and B. Gupta. Secure integration of IoT and cloud computing. Future Generation Computer Systems, 78(3): 964–975, 2018.

31. D. C. Verma. Simplifying network administration using policy-based management. IEEE Network, 16(2): 20–26, March 2002.

32. D. C. Verma. Policy-Based Networking: Architecture and Algorithms. New Riders Publishing, Thousand Oaks, CA, 2000.

33. J. Rubio-Loyola, J. Serrat, M. Charalambides, P. Flegkas, and G. Pavlou. A methodological approach toward the refinement problem in policy-based management systems. IEEE Communications Magazine, 44(10): 60–68, October 2006.

34. F. Perich. Policy-based network management for next generation spectrum access control. In Proceedings of International Symposium on New Frontiers in Dynamic Spectrum Access Networks, pages 496–506, April 2007.

35. S. Shin, P. A. Porras, V. Yegneswaran, M. W. Fong, G. Gu, and M. Tyson. FRESCO: Modular composable security services for Software-Defined Networks. In Proceedings of the 20th Annual Network and Distributed System Security Symposium, pages 1–16, 2013.

36. K. Odagiri, S. Shimizu, N. Ishii, and M. Takizawa. Functional experiment of virtual policy based network management scheme in Cloud environment. In International Conference on Network-Based Information Systems, pages 208–214, September 2014.

37. M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker. Ethane: Taking control of the enterprise. In Proceedings of Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, pages 1–12, August 2007.

38. M. Wichtlhuber, R. Reinecke, and D. Hausheer. An SDN-based CDN/ISP collaboration architecture for managing high-volume flows. IEEE Transactions on Network and Service Management, 12(1): 48–60, March 2015.

39. A. Lara, and B. Ramamurthy. OpenSec: Policy-based security using Software-Defined Networking. IEEE Transactions on Network and Service Management, 13(1): 30–42, March 2016.

40. W. Jingjin, Z. Yujing, M. Zukerman, and E. K. N. Yung. Energy-efficient base stations sleep-mode techniques in green cellular networks: A survey. IEEE Communications Surveys Tutorials, 17(2): 803–826, 2015.

41. G. Auer, V. Giannini, C. Desset, I. Godor, P. Skillermark, M. Olsson, M. A. Imran, D. Sabella, M. J. Gonzalez, O. Blume, and A. Fehske. How much energy is needed to run a wireless network? IEEE Wireless Communications, 18(5): 40–49, 2011.

42. W. Yun, J. Staudinger, and M. Miller. High efficiency linear GaAs MMIC amplifier for wireless base station and Femto cell applications. In IEEE Topical Conference on Power Amplifiers for Wireless and Radio Applications, pages 49–52, January 2012.

43. M. A. Marsan, L. Chiaraviglio, D. Ciullo, and M. Meo. Optimal energy savings in cellular access networks. In IEEE International Conference on Communications Workshops, pages 1–5, June 2009.

44. H. Claussen, I. Ashraf, and L. T. W. Ho. Dynamic idle mode procedures for femtocells. Bell Labs Technical Journal, 15(2): 95–116, 2010.

45. L. Rongpeng, Z. Zhifeng, C. Xianfu, J. Palicot, and Z. Honggang. TACT: A transfer actor-critic learning framework for energy saving in cellular radio access networks. IEEE Transactions on Wireless Communications, 13(4): 2000–2011, 2014.

46. G. C. Januario, C. H. A. Costa, M. C. Amarai, A. C. Riekstin, T. C. M. B. Carvalho, and C. Meirosu. Evaluation of a policy-based network management system for energy-efficiency. In IFIP/IEEE International Symposium on Integrated Network Management, pages 596–602, May 2013.

47. C. Dsouza, G. J. Ahn, and M. Taguinod. Policy-driven security management for fog computing: Preliminary framework and a case study. In Conference on Information Reuse and Integration, pages 16–23, August 2014.

48. H. Kim and N. Feamster. Improving network management with Software Defined Networking. IEEE Communications Magazine, 51(2): 114–119, February 2013.

49. O. Gaddour, A. Koubaa, and M. Abid. Quality-of-service aware routing for static and mobile IPv6-based low-power and lossy sensor networks using RPL. Ad Hoc Networks, 33: 233–256, 2015.

50. Q. Zhao, D. Grace, and T. Clarke. Transfer learning and cooperation management: Balancing the quality of service and information exchange overhead in cognitive radio networks. Transactions on Emerging Telecommunications Technologies, 26(2): 290–301, 2015.

51. M. Charalambides, P. Flegkas, G. Pavlou, A. K. Bandara, E. C. Lupu, A. Russo, N. Dulav, M. Sloman, and J. Rubio-Loyola. Policy conflict analysis for quality of service management. In Proceedings of the 6th IEEE International Workshop on Policies for Distributed Systems and Networks, pages 99–108, June 2005.

52. M. F. Bari, S. R. Chowdhury, R. Ahmed, and R. Boutaba. PolicyCop: An autonomic QoS policy enforcement framework for software defined networks. In 2013 IEEE SDN for Future Networks and Services, pages 1–7, November 2013.

53. C. Bennewith and R. Wickers. The mobile paradigm for content development, In Multimedia and E-Content Trends, pages 101–109. Vieweg+Teubner Verlag, 2009.

54. I. A. Junglas, and R. T. Watson. Location-based services. Communications ACM, 51(3): 65–69, March 2008.

55. M. Weiser. The computer for the 21st century. Scientific American, 265(3): 94–104, 1991.

56. G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, and P. Steggles. Towards a better understanding of context and context-awareness. In Handheld and Ubiquitous Computing, pages 304–307, September 1999.

57. B. Schilit, N. Adams, and R. Want. Context-aware computing applications. In Proceeding of the 1st Workshop Mobile Computing Systems and Applications, pages 85–90, December 1994.

58. N. Ryan, J. Pascoe, and D. Morse. Enhanced reality fieldwork: The context aware archaeological assistant. In Proceedings of the 25th Anniversary Computer Applications in Archaeology, pages 85–90, December 1997.

59. A. K. Dey. Context-aware computing: The CyberDesk project. In Proceedings of the AAAI 1998 Spring Symposium on Intelligent Environments, pages 51–54, 1998.

60. P. Prekop and M. Burnett. Activities, context and ubiquitous computing. Computer Communications, 26(11): 1168–1176, July 2003.

61. R. M. Gustavsen. Condor-an application framework for mobility-based context-aware applications. In Proceedings of the Workshop on Concepts and Models for Ubiquitous Computing, volume 39, September 2002.

62. C. Tadj and G. Ngantchaha. Context handling in a pervasive computing system framework. In Proceedings of the 3rd International Conference on Mobile Technology, Applications and Systems, pages 1–6, October 2006.

63. S. Dhar and U. Varshney. Challenges and business models for mobile location-based services and advertising. Communications ACM, 54(5): 121–128, May 2011.

64. F. Ricci, L. Rokach, and B. Shapira. Recommender Systems: Introduction and Challenges, In Recommender Systems Handbook, pages 1–34. Springer, Boston, MA, 2015.

65. J. B. Schafer, D. Frankowski, J. Herlocker, and S. Sen. Collaborative Filtering Recommender Systems, In The Adaptive Web, pages 291–324. Springer, Berlin, Heidelberg, 2007.

66. P. Lops, M. de Gemmis, and G. Semeraro. Content-Based Recommender Systems: State of the Art and Trends, In Recommender Systems Handbook, pages 73–105. Springer, Boston, MA, 2011.

67. D. Slamanig and C. Stingl. Privacy aspects of eHealth. In Proceedings of Conference on Availability, Reliability and Security, pages 1226–1233, March 2008.

68. C. Wang. Policy-based network management. In Proceedings of the International Conference on Communication Technology, volume 1, pages 101–105, 2000.

69. R. Want, A. Hopper, V. Falcao, and J. Gibbons. The active badge location system. ACM Transactions on Information Systems, 10(1): 91–102, January 1992.

70. K. R. Wood, T. Richardson, F. Bennett, A. Harter, and A. Hopper. Global teleporting with Java: Toward ubiquitous personalized computing. Computer, 30(2): 53–59, February 1997.

71. C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos. Context aware computing for the Internet of Things: A survey. IEEE Communications Surveys Tutorials, 16(1): 414–454, 2014.

72. B. Guo, L. Sun, and D. Zhang. The architecture design of a cross-domain context management system. In Proceedings of Conference Pervasive Computing and Communications Workshops, pages 499–504, April 2010.

73. A. Badii, M. Crouch, and C. Lallah. A context-awareness framework for intelligent networked embedded systems. In Proceedings of Conference on Advances in Human-Oriented and Personalized Mechanisms, Technologies and Services, pages 105–110, August 2010.

74. S. Pietschmann, A. Mitschick, R. Winkler, and K. Meissner. CroCo: Ontology-based, crossapplication context management. In Proceedings of Workshop on Semantic Media Adaptation and Personalization, pages 88–93, December 2008.

75. T. Gu, X. H. Wang, H. K. Pung, and D. Q. Zhang. An ontology-based context model in intelligent environments. In Proceedings of Communication Networks and Distributed Systems Modeling and Simulation Conference, pages 270–275, January 2004.

76. H. Chen, T. Finin, and A. Joshi. An ontology for context-aware pervasive computing environments. The Knowledge Engineering Review, 18(03): 197–207, September 2003.

77. D. Ejigu, M. Scuturici, and L. Brunie. CoCA: A collaborative context-aware service platform for pervasive computing. In Proceedings of Conference Information Technologies, pages 297–302, April 2007.

78. R. Yus, E. Mena, S. Ilarri, and A. Illarramendi. SHERLOCK: Semantic management of location based services in wireless environments. Pervasive and Mobile Computing, 15: 87–99, 2014.

79. L. Tang, Z. Yu, H. Wang, X. Zhou, and Z. Duan. Methodology and tools for pervasive application development. International Journal of Distributed Sensor Networks, 10(4): 1–16, 2014.

80. B. Bertran, J. Bruneau, D. Cassou, N. Loriant, E. Balland, and C. Consel. DiaSuite: A tool suite to develop sense/compute/control applications. Science of Computer Programming, 79: 39–51, 2014.

81. P. Jagtap, A. Joshi, T. Finin, and L. Zavala. Preserving privacy in context-aware systems. In Proceedings of Conference on Semantic Computing, pages 149–153, September 2011.

82. V. Sacramento, M. Endler, and F. N. Nascimento. A privacy service for context-aware mobile computing. In Proceedings of Conference on Security and Privacy for Emergency Areas in Communication Networks, pages 182–193, September 2005.

83. A. Huertas Celdrán, F. J. García Clemente, M. Gil Pérez, and G. Martínez Pérez. SeCoMan: A semantic-aware policy framework for developing privacy-preserving and context-aware smart applications. IEEE Systems Journal, 10(3): 1111–1124, September 2016.

84. J. Qu, G. Zhang, and Z. Fang. Prophet: A context-aware location privacy-preserving scheme in location sharing service. Discrete Dynamics in Nature and Society, 2017, 1–11, Article ID 6814832, 2017.

85. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. PRECISE: Privacy-aware recommender based on context information for Cloud service environments. IEEE Communications Magazine, 52(8): 90–96, August 2014.

86. S. Chitkara, N. Gothoskar, S. Harish, J.I. Hong, and Y. Agarwal. Does this app really need my location? Context-aware privacy management for smartphones. In Proceedings of the ACM Interactive Mobile, Wearable and Ubiquitous Technologies, 1(3): 42:1–42:22, September 2017.

87. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. What private information are you disclosing? A privacy-preserving system supervised by yourself. In Proceedings of the 6th International Symposium on Cyberspace Safety and Security, pages 1221–1228, August 2014.

88. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. MASTERY: A multicontext-aware system that preserves the users’ privacy. In IEEE/IFIP Network Operations and Management Symposium, pages 523–528, April 2016.

89. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Preserving patients’ privacy in health scenarios through a multicontext-aware system. Annals of Telecommunications, 72(9–10): 577–587, October 2017.

90. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Policy-based management for green mobile networks through software-defined networking. Mobile Networks and Applications, In Press, 2016.

91. A. Huertas Celdrán, M. Gil Pérez, F. J. García Clemente, and G. Martínez Pérez. Enabling highly dynamic mobile scenarios with software defined networking. IEEE Communications Magazine, Feature Topics Issue on SDN Use Cases for Service Provider Networks, 55(4): 108–113, April 2017. 


Artificial Intelligence - Machine Translation.

  



Machine translation is the process of using computer technology to automatically translate human languages.

From the 1950s through the 1970s, the US government saw machine translation as a valuable instrument in diplomatic efforts to contain communism in the USSR and the People's Republic of China.

Machine translation has lately become a tool for marketing goods and services in countries where they would otherwise be unavailable due to language limitations, as well as a standalone offering.

Machine translation is also one of the litmus tests for artificial intelligence progress.

Research in this area of artificial intelligence advances along several broad paradigms.

Rule-based expert systems and statistical approaches to machine translation are the earliest.

Neural-based machine translation and example-based machine translation (or translation by analogy) are two more contemporary paradigms.

Within computational linguistics, automated language translation is now regarded as an academic specialization.

While there are multiple possible roots for the present discipline of machine translation, the notion of automated translation as an academic topic derives from a 1947 communication between crystallographer Andrew D. Booth of Birkbeck College (London) and Warren Weaver of the Rockefeller Foundation.

"I have a manuscript in front of me that is written in Russian, but I am going to assume that it is truly written in English and that it has been coded in some bizarre symbols," Weaver said in a preserved note to colleagues in 1949.

"To access the information contained in the text, all I have to do is peel away the code" (Warren Weaver, as cited in Arnold et al. 1994, 13).

Most commercial machine translation systems have a translation engine at their core.

The user's sentences are parsed several times by translation engines, each time applying algorithmic rules to transform the source sentence into the desired target language.

There are rules for word-based and phrase-based transformation.

The initial objective of parser software is generally to replace words using a two-language dictionary.

Additional processing rounds over the phrases apply comparative grammatical rules that consider sentence structure, verb form, and suffixes.
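
A toy sketch of such a pass (with an invented five-word dictionary and a single reordering rule, not any commercial engine's rule set) might look like this:

```python
# Pass 1: word substitution with a two-language dictionary.
# Pass 2: a comparative grammar rule (Spanish places adjectives after nouns).
en_to_es = {"the": "el", "red": "rojo", "car": "coche", "is": "es", "fast": "rápido"}
adjectives = {"rojo", "rápido"}
nouns = {"coche"}

def translate(sentence):
    words = [en_to_es.get(w, w) for w in sentence.lower().split()]
    for i in range(len(words) - 1):
        if words[i] in adjectives and words[i + 1] in nouns:
            words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

print(translate("the red car is fast"))  # "el coche rojo es rápido"
```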

Translation engines are measured by the intelligibility and accuracy of their output.

Machine translation isn't perfect.

Poor grammar in the source text, lexical and structural differences between languages, ambiguous usage, multiple meanings of words and idioms, and local variations in usage can all lead to "word salad" translations.

In 1959–60, MIT philosopher, linguist, and mathematician Yehoshua Bar-Hillel issued the harshest early criticism of machine translation of language.

In principle, according to Bar-Hillel, near-perfect machine translation is impossible.

He used the following sentence to demonstrate the issue: John was on the prowl for his toy box.

He eventually discovered it.

The box was in the pen.

John was overjoyed.

The word "pen" poses a problem in this statement since it might refer to a child's playpen or a writing ballpoint pen.

Knowing the difference necessitates a broad understanding of the world, which a computer lacks.

When the National Academy of Sciences Automatic Language Processing Advisory Committee (ALPAC) released an extremely damaging report about the poor quality and high cost of machine translation in 1966, the initial rounds of US government funding eroded.

ALPAC came to the conclusion that the country already had an abundant supply of human translators capable of producing significantly greater translations.

Many machine translation experts slammed the ALPAC report, pointing to machine efficiency in the preparation of first drafts and the successful rollout of a few machine translation systems.

In the 1960s and 1970s, there were only a few machine translation research groups.

The TAUM group in Canada, the Mel'cuk and Apresian groups in the Soviet Union, the GETA group in France, and the German Saarbrücken SUSY group were among the biggest.

SYSTRAN (System Translation), a private corporation founded by Hungarian-born linguist and computer scientist Peter Toma and financed by government contracts, was the main supplier of automated translation technology and services in the United States.

In the 1950s, Toma became interested in machine translation while studying at the California Institute of Technology.

Around 1960, Toma moved to Georgetown University and started collaborating with other machine translation experts.

The Georgetown machine translation project, as well as SYSTRAN's initial contract with the United States Air Force in 1969, were both devoted to translating Russian into English.

That same year, at Wright-Patterson Air Force Base, the company's first machine translation programs were tested.

SYSTRAN software was used by the National Aeronautics and Space Administration (NASA) as a translation help during the Apollo-Soyuz Test Project in 1974 and 1975.

Shortly after, SYSTRAN was awarded a contract by the Commission of the European Communities to offer automated translation services, and the company has subsequently amalgamated with the European Commission (EC).

By the 1990s, the EC had seventeen different machine translation systems focused on different language pairs in use for internal communications.

In 1992, SYSTRAN began migrating its mainframe software to personal computers.

SYSTRAN Professional Premium for Windows was launched in 1995 by the company.

SYSTRAN continues to be the industry leader in machine translation.

Other notable systems included METEO, in use by the Canadian Meteorological Center in Montreal since 1977 for translating weather bulletins from English to French; ALPS, developed at Brigham Young University for Bible translation; SPANAM, the Pan American Health Organization's Spanish-to-English automatic translation system; and METAL, developed at the University of Texas at Austin.

In the late 1990s, machine translation became more readily accessible to the general public through web browsers.

Babel Fish, a web-based application created by a group of researchers at Digital Equipment Corporation (DEC) using SYSTRAN machine translation technology, was one of the earliest online language translation services.

The technology supported thirty-six translation pairs among thirteen languages.

Babel Fish began as an AltaVista web search engine tool before being sold to Yahoo! and then Microsoft.

The majority of online translation services still use rule-based and statistical machine translation.

Around 2016, SYSTRAN, Microsoft Translator, and Google Translate made the switch to neural machine translation.

Google Translate supports 103 languages.

Neural machine translation uses predictive deep learning algorithms and artificial neural networks, or connectionist systems modeled after biological brains.

Machine translation based on neural networks is achieved in two steps.

In the first phase, the translation engine models its interpretation of each source word based on its context within the entire sentence.

The artificial neural network then translates the entire word model into the target language in the second phase.

Simply said, the engine predicts the probability of word sequences and combinations inside whole sentences, resulting in a fully integrated translation model.

The underlying algorithms use statistical models to learn language rules.
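
As a toy illustration of the word-sequence probability idea (the vocabulary and probabilities below are invented, and a real neural engine would compute them with a network rather than a lookup table), candidate target sentences can be scored by summing log conditional probabilities and the highest-scoring sequence chosen:

```python
import math

# Hypothetical conditional probabilities P(next_word | previous_word).
cond_prob = {
    ("<s>", "the"): 0.6, ("<s>", "a"): 0.4,
    ("the", "cat"): 0.5, ("the", "dog"): 0.5,
    ("a", "cat"): 0.3, ("a", "dog"): 0.7,
    ("cat", "</s>"): 1.0, ("dog", "</s>"): 1.0,
}

def sequence_log_prob(words):
    """Sum log P(w_i | w_{i-1}) over the sentence padded with start/end markers."""
    padded = ["<s>"] + words + ["</s>"]
    return sum(math.log(cond_prob.get(pair, 1e-9)) for pair in zip(padded, padded[1:]))

candidates = [["the", "cat"], ["a", "cat"], ["a", "dog"]]
print(max(candidates, key=sequence_log_prob))  # ['the', 'cat'] is the most probable
```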

The Harvard SEAS natural language processing group, in collaboration with SYSTRAN, has launched OpenNMT, an open-source neural machine translation system.



Jai Krishna Ponnappan





See also: 


Cheng, Lili; Natural Language Processing and Speech Understanding.



Further Reading:


Arnold, Doug J., Lorna Balkan, R. Lee Humphreys, Seity Meijer, and Louisa Sadler. 1994. Machine Translation: An Introductory Guide. Manchester and Oxford: NCC Blackwell.

Bar-Hillel, Yehoshua. 1960. “The Present Status of Automatic Translation of Languages.” Advances in Computers 1: 91–163.

Garvin, Paul L. 1967. “Machine Translation: Fact or Fancy?” Datamation 13, no. 4: 29–31.

Hutchins, W. John, ed. 2000. Early Years in Machine Translation: Memoirs and Biographies of Pioneers. Philadelphia: John Benjamins.

Locke, William Nash, and Andrew Donald Booth, eds. 1955. Machine Translation of Languages. New York: Wiley.

Yngve, Victor H. 1964. “Implications of Mechanical Translation Research.” Proceedings of the American Philosophical Society 108 (August): 275–81.



Artificial Intelligence - What Is The Mac Hack IV Program?

 




Mac Hack IV, a chess program written in 1967 by Richard Greenblatt, gained renown for being the first computer chess program to compete in a chess tournament and to play adequately against humans, obtaining a USCF rating of 1,400 to 1,500.

Greenblatt's software, written in the macro assembly language MIDAS, operated on a DEC PDP-6 computer with a clock speed of 200 kilohertz.

While a graduate student at MIT's Artificial Intelligence Laboratory, he built the software as part of Project MAC.

"Chess is the drosophila [fruit fly] of artificial intelligence," according to Russian mathematician Alexander Kronrod, the field's chosen experimental organ ism (Quoted in McCarthy 1990, 227).



Creating a champion chess program has been a cherished goal in artificial intelligence since 1950, when Claude Shannon first described chess play as a task for computer programmers.

Chess and games in general involve difficult but well-defined issues with well-defined rules and objectives.

Chess has long been seen as a prime illustration of human-like intelligence.

Chess is a well-defined example of human decision-making in which movements must be chosen with a specific purpose in mind, with limited knowledge and uncertainty about the result.

The processing capability of computers in the mid-1960s severely restricted the depth to which a chess move and its alternative answers could be studied since the number of different configurations rises exponentially with each consecutive reply.

The greatest human players have been shown to examine a small number of moves in great detail rather than a large number of moves at a shallower depth.

Greenblatt aimed to recreate the methods used by good players to locate significant game tree branches.

He created Mac Hack to reduce the number of nodes analyzed while choosing moves by using a minimax search of the game tree along with alpha-beta pruning and heuristic components.

In this regard, Mac Hack's style of play was more human-like than that of more current chess computers (such as Deep Thought and Deep Blue), which use the sheer force of high processing rates to study tens of millions of branches of the game tree before making moves.
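
A minimal sketch of minimax search with alpha-beta pruning over a toy game tree is shown below (the tree shape and leaf values are illustrative and stand in for a position evaluation, not Mac Hack's actual heuristics):

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, skipping branches that cannot matter."""
    children = node.get("children")
    if depth == 0 or not children:
        return node["value"]  # heuristic evaluation at the leaves
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:  # prune: the minimizing opponent will avoid this line
                break
        return best
    best = float("inf")
    for child in children:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

tree = {"children": [
    {"children": [{"value": 3}, {"value": 5}]},
    {"children": [{"value": 2}, {"value": 9}]},  # the 9 is never examined (pruned)
]}
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # 3
```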

In a contest hosted by MIT mathematician Seymour Papert in 1967, Mac Hack defeated MIT philosopher Hubert Dreyfus and gained substantial renown among artificial intelligence researchers.

The RAND Corporation published a mimeographed version of Dreyfus's paper, Alchemy and Artificial Intelligence, in 1965, which criticized artificial intelligence researchers' claims and aspirations.

Dreyfus claimed that no computer could ever acquire intelligence since human reason and intelligence are not totally rule-bound, and hence a computer's data processing could not imitate or represent human cognition.

In a part of the paper titled "Signs of Stagnation," Dreyfus highlighted attempts to construct chess-playing computers, among his many critiques of AI.

Mac Hack's victory against Dreyfus was first seen as vindication by the AI community.



Jai Krishna Ponnappan





See also: 


Alchemy and Artificial Intelligence; Deep Blue.



Further Reading:



Crevier, Daniel. 1993. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: Basic Books.

Greenblatt, Richard D., Donald E. Eastlake III, and Stephen D. Crocker. 1967. “The Greenblatt Chess Program.” In AFIPS ’67: Proceedings of the November 14–16, 1967, Fall Joint Computer Conference, 801–10. Washington, DC: Thomson Book Company.

Marsland, T. Anthony. 1990. “A Short History of Computer Chess.” In Computers, Chess, and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 3–7. New York: Springer-Verlag.

McCarthy, John. 1990. “Chess as the Drosophila of AI.” In Computers, Chess, and Cognition, edited by T. Anthony Marsland and Jonathan Schaeffer, 227–37. New York: Springer-Verlag.

McCorduck, Pamela. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.



