What Is The SSLV Rocket?



    What Is SSLV?

    The Small Satellite Launch Vehicle (SSLV) is an ISRO-developed small-lift launch vehicle with a payload capacity of 500 kg (1,100 lb) to low Earth orbit (500 km (310 mi)) or 300 kg (660 lb) to Sun-synchronous orbit (500 km (310 mi)) for launching small satellites, as well as the ability to support multiple orbital drop-offs. 




    SSLV is designed with low cost and quick turnaround in mind, with launch-on-demand flexibility and minimum infrastructure needs. 

    The SSLV-D1 launched from the First Launch Pad on August 7, 2022, but failed to reach orbit. 

SSLV launches to Sun-synchronous orbit will be handled in the future by the SSLV Launch Complex (SLC) at Kulasekharapatnam in Tamil Nadu.




    After entering the operational phase, the vehicle's manufacture and launch operations would be handled by an Indian consortium led by NewSpace India Limited (NSIL). 


    What Is The Origin And Evolution Of SSLV?



The SSLV was created with the goal of commercially launching small satellites at a far lower cost and with a higher launch rate than the Polar Satellite Launch Vehicle (PSLV).

SSLV has a development cost of ₹169.07 crore (US$21 million) and a production cost of ₹30 crore (US$3.8 million) to ₹35 crore (US$4.4 million).

    The expected high launch rate is based on mostly autonomous launch operations and simplified logistics in general. 

In comparison, a PSLV launch involves about 600 personnel, whereas SSLV launch operations are managed by a small team of about six people.



    The SSLV's launch preparation phase is predicted to be less than a week rather than months. 



The launch vehicle can be integrated vertically, like the current PSLV and Geosynchronous Satellite Launch Vehicle (GSLV), or horizontally, like the decommissioned Satellite Launch Vehicle (SLV) and Augmented Satellite Launch Vehicle (ASLV).


The vehicle's first three stages use HTPB-based solid propellant, while the fourth, terminal stage is a Velocity Trimming Module (VTM) with eight 50 N reaction control thrusters for attitude control and eight 50 N axial thrusters for adjusting velocity.


SSLV's first and third stages (SS1 and SS3) are new designs, while the second stage (SS2) is derived from PSLV's third stage (HPS3).



    Where Is The SSLV Launch Complex?



    Early developmental flights and those to inclined orbits would launch from Sriharikota, first from existing launch pads and ultimately from a new facility in Kulasekharapatnam known as the SSLV Launch Complex (SLC). 

In October 2019, tenders were announced for the production, installation, assembly, inspection, and testing of a Self Propelled Launching Unit (SPU).

    When completed, this proposed spaceport at Kulasekharapatnam in Tamil Nadu would handle SSLV launches to Sun-synchronous orbit. 


    What Is The History Of The SSLV?

In a 2016 National Institute of Advanced Studies paper, Rajaram Nagappa recommended a development route for a 'Small Satellite Launch Vehicle-1' to launch strategic payloads.



    S. Somanath, then-Director of Liquid Propulsion Systems Centre, acknowledged a need for identifying a cost-effective launch vehicle configuration with 500 kg payload capacity to LEO at the National Space Science Symposium in 2016, and development of such a launch vehicle was underway by November 2017. 



    The vehicle design was completed by the Vikram Sarabhai Space Centre (VSSC) in December 2018. 

    All booster segments for the SSLV first stage (SS1) static test (ST01) were received in December 2020 and assembled in the Second Vehicle Assembly Building (SVAB). 

    On March 18, 2021, the SS1 first-stage booster failed its first static fire test (ST01). 

    Oscillations were detected about 60 seconds into the test, and the nozzle of the SS1 stage disintegrated after 95 seconds. 

    The test was supposed to last 110 seconds. 

    SSLV's solid first stage SS1 must pass two consecutive nominal static fire tests in order to fly. 

    In August 2021, the SSLV Payload Fairing (SPLF) functional certification test was completed. 

    On 14 March 2022, the second static fire test of SSLV first stage SS1 was performed at SDSC-SHAR and satisfied the specified test goals. 


    How Will The Small Satellite Launch Vehicle (SSLV) Be Manufactured?

    ISRO has begun development of a Small Satellite Launch Vehicle to serve the burgeoning global small satellite launch service industry. 

    NSIL would be responsible for manufacturing SSLV via Indian industry partners. 

     

    What Are The Unique Features Of The Small Satellite Launch Vehicle (SSLV)?

SSLV has been designed to meet "launch-on-demand" requirements while remaining cost-effective.

It is a three-stage, all-solid vehicle capable of launching satellites of up to 500 kg into a 500 km low Earth orbit.

    What Are The Expected Benefits Of The SSLV Rocket?

The expected benefits include:

• Reduced turnaround time
• Launch-on-demand capability
• Cost optimization in realization and operation
• Ability to accommodate several satellites
• Minimum infrastructure required for launch
• Design practices that have stood the test of time

The first flight from SDSC SHAR was originally scheduled for the fourth quarter of 2019 but took place only in August 2022.

    Following the first developmental flights, ISRO plans to produce SSLV via Indian Industries through its commercial arm, NSIL. 


    What Is The Operational Performance History Of The SSLV?


The SSLV's maiden developmental flight took place on August 7, 2022.

The flight was designated SSLV-D1.

    The SSLV-D1 flight's mission goals were not met. 

    The rocket featured three stages and a fourth Velocity Trimming Module (VTM). 

    The rocket stood 34m tall, with a diameter of 2m, and a lift-off mass of 120t in its D1 version. 

The rocket carried EOS-02, a 135 kg Earth observation satellite, and AzaadiSAT, an 8 kg CubeSat designed by Indian students to promote inclusion in STEM education.


    The SSLV-D1 was planned to deploy the two satellite payloads in a circular orbit with a height of 356.2 km and an inclination of 37.2°. 

ISRO's stated reason for the mission's failure was a software fault.

    The mission software identified an accelerometer anomaly during the second stage separation, according to the ISRO. 

    As a result, the rocket navigation switched from closed loop to open loop guidance. 

Although this change in guidance mode was part of the redundancy built into the rocket's navigation system, it was not enough to save the mission.

In open-loop guidance mode, the terminal VTM stage fired for only 0.1 s rather than the required 20 s.

As a result, the two satellites and the rocket's VTM stage were injected into an unstable 356 km × 76 km elliptical orbit.

The SSLV-D1's terminal VTM stage had 16 hydrazine-fueled (MMH + MON3) thrusters.

Eight of them were used to adjust orbital velocity, and the other eight were used for attitude control.

    During the orbital insertion maneuvers, the VTM stage also controlled pitch, yaw, and roll. 

The SSLV-D1's three major stages all performed well.

However, without a full VTM burn, this was not enough to place the two satellite payloads into stable orbits.

The VTM stage needed to burn for at least 20 seconds to impart the additional orbital velocity and altitude adjustment required to place the two satellite payloads into their designated stable orbits.

    Instead, the VTM activated at 653.5s and shut down at 653.6s after lift-off. 

After this partial VTM firing, EOS-02 was released at 738.5 s and AzaadiSAT at 788.4 s after liftoff.

As a result of these failures, the satellites reached an unstable orbit and were destroyed upon reentry.



    What Was The Performance Outcome Of The SSLV D1 Mission?

This was the SSLV's maiden developmental flight.

    The mission goal was a circular orbit of 356.2 km height and 37.2° inclination. 

Two satellite payloads were carried on the flight:


1. The 135-kilogram EOS-02 Earth observation satellite
2. The 8-kilogram AzaadiSAT CubeSat


Due to a sensor failure and flaws in the onboard software, the stage and the two satellite payloads were placed into an unstable 356 km × 76 km elliptical orbit and were later destroyed upon reentry.

    The mission software, according to the ISRO, failed to detect and rectify a sensor malfunction in the VTM stage. 

    The last VTM stage only fired momentarily (0.1s). 


    What Were The Overall Lessons From The SSLV-D1/EOS-02 Mission?



ISRO developed the Small Satellite Launch Vehicle (SSLV) to launch satellites of up to 500 kg into low Earth orbit on a 'launch-on-demand' basis.


The SSLV-D1/EOS-02 mission's first developmental flight was slated for August 7, 2022, at 09:18 a.m. (IST) from the Satish Dhawan Space Centre's First Launch Pad in Sriharikota.

The SSLV-D1 mission would send EOS-02, a 135 kg satellite, into a low Earth orbit roughly 350 km in altitude at an inclination of about 37 degrees.

The mission also carried the AzaadiSAT satellite.

    SSLV is built with three solid stages weighing 87 t, 7.7 t, and 4.5 t. 

    The satellite is inserted into the desired orbit using a liquid propulsion-based velocity trimming module. 

    • SSLV is capable of launching Mini, Micro, or Nanosatellites (weighing between 10 and 500 kg) into a 500 km planar orbit. 
    • SSLV gives low-cost on-demand access to space. 
    • It has a quick turnaround time, the ability to accommodate numerous satellites, the ability to launch on demand, minimum launch infrastructure needs, and so on. 



    SSLV-D1 is a 34-meter-tall, 2-meter-diameter vehicle with a lift-off mass of 120 tonnes. 

    ISRO developed and built the EOS-02 earth observation satellite. 



    This microsat class satellite provides superior optical remote sensing with excellent spatial resolution in the infrared spectrum. 

    The bus configuration is based on the IMS-1 bus. 

AzaadiSAT is an 8U CubeSat that weighs around 8 kg.

    It transports 75 distinct payloads, each weighing roughly 50 grams and performing femto-experiments. 

    These payloads were built with the help of female students from rural areas around the nation. 

    The payloads were assembled by the "Space Kidz India" student team. 

Among the payloads are:

• A UHF-VHF transponder operating on ham radio frequencies, allowing amateur radio operators to transmit voice and data
• A solid-state PIN-diode-based radiation counter to detect ionizing radiation in its orbit
• A long-range transponder
• A selfie camera

    The data from this satellite was planned to be received using the ground system built by 'Space Kidz India.'  

Both satellite missions failed as a result of the failure of SSLV-D1's terminal stage.



    When Is The SSLV D2 Planned To Lift Off?

    The SSLV's second developmental flight is planned for November of 2022. 

It is intended to carry four BlackSky Global satellites weighing 56 kg to a 500 km circular orbit with a 50° inclination.

It will place the X-ray polarimeter satellite into low Earth orbit (LEO).


    ~ Jai Krishna Ponnappan.


    AI Glossary - What Is The ARTMAP-IC?

       


      What Is The ARTMAP-IC Algorithm?

This network enhances the basic fuzzy ARTMAP with distributed prediction and category instance counting.


      How Is The ARTMAP-IC Used For Medical Diagnosis?

      Medical diagnosis with ARTMAP-IC: Inconsistent cases and instance counting. 



      The ARTMAP-IC neural network extends the fundamental fuzzy ARTMAP system with distributed prediction and category instance counting for challenging database prediction issues like medical diagnosis. 

      A new version of the ARTMAP match tracking algorithm, which governs search after a predictive mistake, makes prediction with sparse or inconsistent data easier. 

Compared to the original match tracking algorithm (MT+), the new approach (MT−) better approximates the real-time network differential equations and further compresses memory without loss of performance.

Simulation analyses of four medical databases—Pima Indian diabetes, breast cancer, heart disease, and gallbladder removal—examine the network's predictive accuracy on these conditions.



ARTMAP-IC results are comparable to or better than those obtained with logistic regression, K-nearest neighbor (KNN), the ADAP perceptron, multisurface pattern separation, CLASSIT, instance-based learning (IBL), and C4.

      The dynamics of ARTMAP are quick, reliable, and scalable. 



      By repeatedly training the system on various input set orderings, a voting technique enhances prediction. 

      Confidence intervals for competing predictions are derived from voting, instance counting, and distributed representations.
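To illustrate the voting idea in code, here is a minimal, hypothetical Python sketch. It assumes an order-sensitive classifier object exposing train_one(a, k) and predict(a) methods (like the ARTMAP-IC sketch given later in this entry); the factory argument make_classifier and the use of integer class labels are assumptions for illustration, not part of the original system.

```python
import numpy as np

def vote_predict(make_classifier, X_train, y_train, X_test, n_voters=5, seed=0):
    """Train several copies of an order-sensitive classifier on shuffled
    orderings of the training set and combine predictions by majority vote."""
    rng = np.random.default_rng(seed)
    all_votes = []
    for _ in range(n_voters):
        clf = make_classifier()                     # fresh, untrained network
        for i in rng.permutation(len(X_train)):     # a new input-set ordering
            clf.train_one(X_train[i], y_train[i])
        all_votes.append([clf.predict(a) for a in X_test])
    votes = np.array(all_votes)                     # shape (n_voters, n_test)
    # majority vote per test pattern (labels assumed to be small non-negative ints)
    return [int(np.bincount(col).argmax()) for col in votes.T]
```

The spread of votes across the differently ordered networks is also what underlies the confidence estimates mentioned above.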


How Does The ARTMAP-IC Neural Network Classifier Function?

      In an ART-based network, information reverberates between the network’s layers. 

Learning is possible in the network when resonance of the neuronal activity occurs. ART1 was developed to perform clustering on binary-valued patterns.

      By interconnecting two ART1 modules, ARTMAP was the first ART-based architecture suited for classification tasks. 

ARTMAP-IC adds to the basic ARTMAP system new capabilities designed to solve the problem of inconsistent cases, which arises in prediction when similar input vectors correspond to cases with different outcomes (Carpenter, Grossberg, and Reynolds, 1991; Carpenter and Markuzon, 1998).

      It modifies the ARTMAP search algorithm to allow the network to encode inconsistent cases (IC). 

The figure below, adapted from Carpenter and Markuzon (1998), shows the architecture of an ARTMAP-IC network.


      Simplified ARTMAP-IC Architecture


It consists of fully connected layers of nodes: an M-node input layer F1, an N-node competitive layer F2, an N-node instance-counting layer F3, an L-node output layer F0^b, and an L-node map field Fab that links F3 and F0^b.

In ARTMAP-IC, an input a = (a1, a2, …, aM) learns to predict an outcome b = (b1, b2, …, bL), where only one component bK = 1, placing the input a in class K.

      With fast learning, β=1, ARTMAP-IC represents category K as hyper-rectangle ℜK that just encloses all the training set patterns a to which it has been assigned. 

A set of real weights W = {wji : j = 1,…,N; i = 1,…,M} is associated with the F1–F2 layer connections. Each F2 node j represents a category in the input space and stores a prototype vector wj = (wj1, wj2, …, wjM).

The F2 layer is connected through associative links to F3, which in turn is connected to the map field Fab by associative links with binary weights Wab = {wjk^ab : j = 1,…,N; k = 1,…,L}.

The vector wj^ab = (wj1^ab, wj2^ab, …, wjL^ab) relates F2 node j to one of the L output classes. Instance counting biases distributed predictions according to the number of training-set inputs classified by each F2 node.

      During testing the F2->F3 input yj is multiplied by the counting weight cj to produce normalized F3 activity, which projects to the map field Fab for prediction. 
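As a rough numerical illustration of this counting-weighted projection, the toy NumPy snippet below uses made-up values (three committed F2 nodes, two output classes) to compute the map-field input Sk from the activations yj, the counts cj, and the binary weights wjk^ab.

```python
import numpy as np

y = np.array([0.8, 0.5, 0.3])      # F2 activations y_j for one test input (toy values)
c = np.array([12.0, 3.0, 7.0])     # instance counts c_j accumulated during training
W_ab = np.array([[1, 0],           # binary map-field weights w_jk^ab:
                 [0, 1],           # each F2 node j points to exactly one class k(j)
                 [1, 0]])

y3 = y * c                         # F2 -> F3: counting-weighted activity
y3 = y3 / y3.sum()                 # normalized F3 activity
S = y3 @ W_ab                      # map-field input S_k(y) = sum_j y3_j * w_jk^ab
K = int(np.argmax(S))              # predicted class index (here class 0)
print(S, K)
```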


      How Does The ARTMAP-IC Algorithm Operate In Classifier Mode?

      The following algorithm describes the operation of an ARTMAP-IC classifier in learning mode: 


      1. Initialization: 

      Initially, all the neurons of F2 are uncommitted, all weight values wji are initialized to 1, and all weight values wjk of Fab are set to 0. 


      2. Input pattern coding: 

When a training pair (a, b) is presented to the network, a undergoes preprocessing (complement coding) and yields the pattern A = (A1, A2, …, A2M).

      The vigilance parameter ρ is reset to its baseline value. 


      3. Prototype selection: 

      Pattern A activates layer F1 and is propagated through weighted connections W to layer F2. 

      Activation of each node j in the F2 layer is determined by the choice function Tj(A)=|A∧wj|/(α+|wj|). 

      The F2 layer produces a winner-take-all pattern of activity y=(y1,y2,…,yN) such that only node j=J with the greatest activation value remains active (yJ=1). 

Node J propagates its prototype vector wJ back onto F1, and the vigilance test |A∧wJ| ≥ ρM is performed.

      This test compares the degree of match between wJ and A to the vigilance parameter ρ∈[0,1]. 

      If this test is satisfied, node J remains active and resonance is said to occur. 

      Otherwise, the network inhibits the active F2 node and searches for another node J that passes the vigilance test. 

      If such a node does not exist, an uncommitted F2 node becomes active and undergoes learning (step 5). 


      4. Class prediction: 

      Pattern b is fed directly to the map field Fab, while the F2 activity pattern y is propagated to the map field via associative connections Wab. 

The latter input activates Fab nodes according to the prediction function Sk(y) = Σj=1…N yj wjk^ab, and the most active Fab node K yields the class prediction (K = k(J)).

      If node K constitutes an incorrect class prediction, a match tracking signal raises vigilance just enough to induce another search among F2 nodes (step 3). 

      This search continues until either an uncommitted F2 node becomes active (learning ensues at step 5), or a node J that has  previously learned the correct class prediction K becomes active. 

      5. Learning: 

      Learning input a involves updating prototype vector wJ, and if J corresponds to a newly committed node, creating a permanent associative link to Fab. 

A new association between F2 node J and Fab node K (K = k(J)) is learned by setting wJk^ab = 1 for k = K, where K is the target class label for a.

Once the weights (W and Wab) have converged for the training set patterns, ARTMAP-IC can predict a class label for an input pattern by performing steps 2, 3, and 4 without any further learning of the weights.

A pattern a that activates node J is predicted to belong to class K = k(J).
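Tying the steps together, the following is a minimal, hypothetical Python sketch of an ARTMAP-IC-style classifier. It implements complement coding, the choice function Tj, the vigilance test, match tracking, fast learning (β = 1), and counting-weighted winner-take-all prediction. The class and method names are illustrative; it uses the simpler MT+ match-tracking rule and winner-take-all prediction rather than the MT− rule and distributed prediction of the full ARTMAP-IC system, so it is a sketch rather than a reference implementation.

```python
import numpy as np

class SimpleARTMAPIC:
    """Illustrative ARTMAP-IC-style classifier (not a reference implementation).

    Inputs a are assumed to be feature vectors scaled into [0, 1];
    class labels k are assumed to be small non-negative integers.
    """

    def __init__(self, alpha=0.001, rho_bar=0.0, epsilon=0.001):
        self.alpha = alpha        # choice parameter (step 3)
        self.rho_bar = rho_bar    # baseline vigilance (step 2)
        self.epsilon = epsilon    # match-tracking increment (step 4)
        self.w = []               # F2 prototype vectors w_j (length 2M)
        self.labels = []          # class k(j) associated with each F2 node
        self.counts = []          # instance counts c_j

    @staticmethod
    def _code(a):
        # Step 2: complement coding, A = (a, 1 - a), so |A| = M
        a = np.asarray(a, dtype=float)
        return np.concatenate([a, 1.0 - a])

    def _choice(self, A):
        # Step 3: choice function T_j(A) = |A ^ w_j| / (alpha + |w_j|)
        return np.array([np.minimum(A, wj).sum() / (self.alpha + wj.sum())
                         for wj in self.w])

    def train_one(self, a, k):
        A, M = self._code(a), len(a)
        rho = self.rho_bar
        order = np.argsort(-self._choice(A)) if self.w else []
        for J in order:                          # search F2 nodes by decreasing T_J
            match = np.minimum(A, self.w[J]).sum() / M
            if match < rho:
                continue                         # fails the vigilance test, keep searching
            if self.labels[J] == k:              # correct class prediction: learn
                self.w[J] = np.minimum(A, self.w[J])   # fast learning, beta = 1
                self.counts[J] += 1
                return
            rho = match + self.epsilon           # match tracking (MT+): raise vigilance
        # no committed node both matches and predicts k: commit a new F2 node
        self.w.append(A.copy())
        self.labels.append(k)
        self.counts.append(1)

    def predict(self, a):
        # Test mode: counting-weighted, winner-take-all prediction (steps 2-4, no learning)
        A = self._code(a)
        y = self._choice(A) * np.array(self.counts, dtype=float)
        return self.labels[int(np.argmax(y))]
```

Under the same assumptions, typical use is to scale each feature into [0, 1], call train_one(a, k) over the training set (optionally over several different input orderings, combining the resulting networks by voting as sketched earlier), and then call predict(a) on new patterns.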




      ~ Jai Krishna Ponnappan







      AI Glossary - What Is ARTMAP?


         


        What Is ARTMAP AI Algorithm?



ARTMAP is the supervised learning variant of the ART-1 model.

        It learns binary input patterns that are given to it.


        The suffix "MAP" is used in the names of numerous supervised ART algorithms, such as Fuzzy ARTMAP.

        Both the inputs and the targets are clustered in these algorithms, and the two sets of clusters are linked.


        The ARTMAP algorithms' fundamental flaw is that they lack a way to prevent overfitting, hence they should not be utilized with noisy data.


        How Does The ARTMAP Neural Network Work?



ARTMAP is a neural network architecture that autonomously learns recognition categories for arbitrary numbers of arbitrarily ordered vectors based on predictive success.

        A pair of Adaptive Resonance Theory modules (ARTa and ARTb) that may self-organize stable recognition categories in response to random input pattern sequences make up this supervised learning system. 

The ARTa module receives a stream of input patterns {a(p)} and the ARTb module receives a stream of input patterns {b(p)}, where b(p) is the correct prediction given a(p).

        An internal controller and an associative learning network connect these ART components to provide real-time autonomous system functioning. 

During test trials, the remaining patterns a(p) are presented without b(p), and their predictions at ARTb are compared with b(p).



        The ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms when tested on a benchmark machine learning database in both on-line and off-line simulations, and achieves 100% accuracy after training on less than half the input patterns in the database. 


        It accomplishes these features by using an internal controller that, on a trial-by-trial basis, links predictive success to category size and simultaneously optimizes predictive generalization and reduces predictive error, using only local operations. 

This computation raises the vigilance parameter ρa of ARTa by the minimum amount needed to correct a predictive error at ARTb.

To accept a category or hypothesis activated by an input a(p), rather than searching for a better one via an autonomously controlled process of hypothesis testing, ARTa must have a minimal level of confidence, which is calibrated by the vigilance parameter ρa.

The degree of match between the input a(p) and the top-down learned expectation, or prototype, that is read out when an ARTa category is activated is compared against ρa.

If the degree of match is less than ρa, search is initiated.
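A minimal sketch of this match-tracking step, under assumed variable names (A for the complement-coded input, w_J for the active ARTa prototype, rho_a for ARTa vigilance), might look as follows:

```python
import numpy as np

def match_track(A, w_J, rho_a, epsilon=1e-3):
    """After a wrong prediction at ARTb, raise ARTa vigilance just above the
    current match value so the offending category is rejected and search resumes."""
    match = np.minimum(A, w_J).sum() / A.sum()   # degree of match |A ^ w_J| / |A|
    return max(rho_a, match + epsilon)
```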


        The self-organizing expert system known as ARTMAP adjusts the selectivity of its hypotheses depending on the accuracy of its predictions. 

        As a result, even if they are identical to frequent occurrences with distinct outcomes, unusual but significant events may be promptly and clearly differentiated. 

In the intervals between input trials, ρa returns to its baseline vigilance value.

When ρa is large, the system operates in a conservative mode and only makes predictions when it is certain of the result.

Few false-alarm errors therefore occur at any stage of learning, yet the system nonetheless reaches asymptotic performance quickly.

        Due to the self-stabilizing nature of ARTMAP learning, it may continue to learn one or more databases without deteriorating its corpus of memories until all available memory has been used.


        What Is Fuzzy ARTMAP?



        For incremental supervised learning of recognition categories and multidimensional maps in response to arbitrary sequences of analogue or binary input vectors, which may represent fuzzily or crisply defined sets of characteristics, a neural network architecture is developed. 

        By taking advantage of a close formal resemblance between the computations of fuzzy subsethood and ART category choosing, resonance, and learning, the architecture, dubbed fuzzy ARTMAP, accomplishes a synthesis of fuzzy logic and adaptive resonance theory (ART) neural networks. 



In comparison to benchmark backpropagation and genetic algorithm systems, fuzzy ARTMAP performance was demonstrated using four classes of simulations.



        A letter recognition database, learning to distinguish between two spirals, identifying locations inside and outside of a circle, and incremental approximation of a piecewise-continuous function are some of the simulations included in this list. 

        Additionally, the fuzzy ARTMAP system is contrasted with Simpson's FMMC system and Salzberg's NGE systems.



        ~ Jai Krishna Ponnappan













        Artificial Intelligence - Who Is Sherry Turkle?

         


         

         

Sherry Turkle (1948–) has a background in sociology and psychology, and her work focuses on human-technology interaction.

        While her study in the 1980s focused on how technology affects people's thinking, her work in the 2000s has become more critical of how technology is utilized at the expense of building and maintaining meaningful interpersonal connections.



She has used AI-enabled products, such as children's toys and robotic pets for the elderly, to highlight what people lose out on when interacting with such things.


        Turkle has been at the vanguard of AI breakthroughs as a professor at the Massachusetts Institute of Technology (MIT) and the creator of the MIT Initiative on Technology and the Self.

In Life on the Screen: Identity in the Age of the Internet (1995), she highlights a conceptual change in the understanding of AI that occurred between the 1960s and 1980s, one that substantially changed the way humans relate to and interact with AI.



        She claims that early AI paradigms depended on extensive preprogramming and employed a rule-based concept of intelligence.


        However, this viewpoint has given place to one that considers intelligence to be emergent.

        This emergent paradigm, which became the recognized mainstream view by 1990, claims that AI arises from a much simpler set of learning algorithms.

        The emergent method, according to Turkle, aims to emulate the way the human brain functions, assisting in the breaking down of barriers between computers and nature, and more generally between the natural and the artificial.

        In summary, an emergent approach to AI allows people to connect to the technology more easily, even thinking of AI-based programs and gadgets as children.



The rising acceptance of the emergent paradigm of AI, and the enhanced relatability it heralds, represents a significant turning point not only for the field of AI but also for Turkle's study and writing on the subject.


        Turkle started to employ ethnographic research techniques to study the relationship between humans and their gadgets in two edited collections, Evocative Objects: Things We Think With (2007) and The Inner History of Devices (2008).


        She emphasized in her book The Inner History of Devices that her intimate ethnography, or the ability to "listen with a third ear," is required to go past the advertising-based clichés that are often employed when addressing technology.


        This method comprises setting up time for silent meditation so that participants may think thoroughly about their interactions with their equipment.


Turkle used similar intimate ethnographic approaches in her second major book, Alone Together: Why We Expect More from Technology and Less from Each Other (2011), to argue that the increasing connection between people and the technology they use is harmful.

        These issues are connected to the increased usage of social media as a form of communication, as well as the continuous degree of familiarity and relatability with technology gadgets, which stems from the emerging AI paradigm that has become practically omnipresent.

She traced the origins of the dilemma back to early pioneers in the field of cybernetics, citing, for example, Norbert Wiener's speculations on the idea of transmitting a human being over a telegraph line in his book God & Golem, Inc. (1964).

        Because it reduces both people and technology to information, this approach to cybernetic thinking blurs the barriers between them.



        In terms of AI, this implies that it doesn't matter whether the machines with which we interact are really intelligent.


        Turkle claims that by engaging with and caring for these technologies, we may deceive ourselves into feeling we are in a relationship, causing us to treat them as if they were sentient.

        In a 2006 presentation titled "Artificial Intelligence at 50: From Building Intelligence to Nurturing Sociabilities" at the Dartmouth Artificial Intelligence Conference, she recognized this trend.

She identified the 1997 Tamagotchi, the 1998 Furby, and the 2000 My Real Baby as early versions of what she refers to as relational artifacts, which are more broadly referred to as social machines in the literature.

        The main difference between these devices and previous children's toys is that these devices come pre-animated and ready for a relationship, whereas previous children's toys required children to project a relationship onto them.

        Turkle argues that this change is about our human weaknesses as much as it is about computer capabilities.

        In other words, just caring for an item increases the likelihood of not only seeing it as intelligent but also feeling a connection to it.

        This sense of connection is more relevant to the typical person engaging with these technologies than abstract philosophic considerations concerning the nature of their intelligence.



        Turkle delves more into the ramifications of people engaging with AI-based technologies in both Alone Together and Reclaiming Conversation: The Power of Talk in a Digital Age (2015).


She provides the example of Adam in Alone Together, who enjoys the admiration of the AI bots he commands in the game Civilization.

        Adam appreciates the fact that he is able to create something fresh when playing.

        Turkle, on the other hand, is skeptical of this interaction, stating that Adam's playing isn't actual creation, but rather the sensation of creation, and that it's problematic since it lacks meaningful pressure or danger.

        In Reclaiming Conversation, she expands on this point, suggesting that social partners simply provide a perception of camaraderie.

        This is important because of the value of human connection and what may be lost in relationships that simply provide a sensation or perception of friendship rather than true friendship.

        Turkle believes that this transition is critical.


She claims that although connections with AI-enabled technologies may have certain advantages, they pale in comparison to what is missing: the full complexity and inherent contradictions that define what it is to be human.


        A person's connection with an AI-enabled technology is not as intricate as one's interaction with other individuals.


        Turkle claims that as individuals have become more used to and dependent on technology gadgets, the definition of friendship has evolved.


        • People's expectations for companionship have been simplified as a result of this transformation, and the advantages that one wants to obtain from partnerships have been reduced.
        • People now tend to associate friendship only with the concept of interaction, ignoring the more nuanced sentiments and arguments that are typical in partnerships.
        • By engaging with gadgets, one may form a relationship with them.
        • Conversations between humans have become merely transactional as human communication has shifted away from face-to-face conversation and toward interaction mediated by devices. 

        In other words, the most that can be anticipated is engagement.



        Turkle, who has a background in psychoanalysis, claims that this kind of transactional communication allows users to spend less time learning to view the world through the eyes of another person, which is a crucial ability for empathy.


        Turkle argues we are in a robotic period in which people yearn for, and in some circumstances prefer, AI-based robotic companionship over that of other humans, drawing together these numerous streams of argument.

        For example, some people enjoy conversing with their iPhone's Siri virtual assistant because they aren't afraid of being judged by it, as evidenced by a series of Siri commercials featuring celebrities talking to their phones.

        Turkle has a problem with this because these devices can only respond as if they understand what is being said.


        AI-based gadgets, on the other hand, are confined to comprehending the literal meanings of data stored on the device.

        They can decipher the contents of phone calendars and emails, but they have no idea what any of this data means to the user.

        There is no discernible difference between a calendar appointment for car maintenance and one for chemotherapy for an AI-based device.

        A person may lose sight of what it is to have an authentic dialogue with another human when entangled in a variety of these robotic connections with a growing number of technologies.


While Reclaiming Conversation documents deteriorating conversation skills and decreasing empathy, it ultimately ends on a positive note.

        Because people are becoming increasingly dissatisfied with their relationships, there may be a chance for face-to-face human communication to reclaim its vital role.


        Turkle's ideas focus on reducing the amount of time people spend on their phones, but AI's involvement in this interaction is equally critical.


        • Users must accept that their virtual assistant connections will never be able to replace face-to-face interactions.
        • This will necessitate being more deliberate in how one uses devices, prioritizing in-person interactions over the faster and easier interactions provided by AI-enabled devices.


        ~ Jai Krishna Ponnappan




        See also: 

        Blade Runner; Chatbots and Loebner Prize; ELIZA; General and Narrow AI; Moral Turing Test; PARRY; Turing, Alan; 2001: A Space Odyssey.




