The Program and System Information Protocol (PSIP) is a collection of MPEG and privately defined tables of program-specific information, originally defined by General Instrument for the DigiCipher 2 system and later extended for the ATSC digital television system. It carries metadata about each channel in a television station's broadcast MPEG transport stream and publishes information about current and upcoming programs, so that viewers can browse program titles and descriptions when choosing what to watch.
- Communicating Systems
- Basic Requirements
- Protocol Design
- Protocol Development
- Further Reading
- External Links
One of the first uses of the term protocol in a data-communication context occurs in a memorandum entitled A Protocol for Use in the NPL Data Communications Network, written by Roger Scantlebury and Keith Bartlett in April 1967. On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The Network Control Program for the ARPANET was first implemented in 1970. The NCP interface allowed application software...
The information exchanged between devices through a network or other media is governed by rules and conventions that can be set out in communication protocol specifications. The nature of communication, the actual data exchanged and any state-dependent behaviors, is defined by these specifications. In digital computing systems, the rules can be expressed by algorithms and data structures. Protocols are to communication what algorithms or programming languages are to computations. Operating sy...
Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol must include rules describing the context. These kinds of rules are said to express the syntax of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the semantics of the communication. Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed:

1. Data formats for data exchange. Digital message bitstrings are exchanged. The bitstrings are divided into fields, and each field carries information relevant to the protocol. Conceptually, the bitstring is divided into two parts called the header and the payload. The actual message is carried in the payload. The...
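The header/payload split described above can be sketched in a few lines. The 4-byte header layout used here (version, message type, payload length) is a hypothetical example for illustration, not any real protocol's wire format:

```python
import struct

def encode(msg_type: int, payload: bytes, version: int = 1) -> bytes:
    # Pack a fixed-size header (network byte order: 1-byte version,
    # 1-byte type, 2-byte length) in front of the payload.
    header = struct.pack("!BBH", version, msg_type, len(payload))
    return header + payload

def decode(frame: bytes):
    # Split the bitstring back into its header fields and payload.
    version, msg_type, length = struct.unpack("!BBH", frame[:4])
    payload = frame[4:4 + length]
    return version, msg_type, payload

frame = encode(7, b"hello")
print(decode(frame))  # (1, 7, b'hello')
```

Both endpoints must agree on this layout in advance; that shared agreement is exactly what a protocol's syntax rules pin down.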
Systems engineering principles have been applied to create a set of common network protocol design principles. The design of complex protocols often involves decomposition into simpler, cooperating protocols. Such a set of cooperating protocols is sometimes called a protocol family or a protocol suite, within a conceptual framework. Communicating systems operate concurrently. An important aspect of concurrent programming is the synchronization of software for receiving and transmitting messages in the proper sequence. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes (CSP). Concurrency can also be modeled using finite state machines, such as Mealy and Moore machines. Mealy and M...
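The finite-state-machine modeling mentioned above can be sketched as a toy Mealy-style machine for a connection handshake. The states, events, and outputs here are illustrative (loosely TCP-flavored), not a faithful model of any specific protocol:

```python
# Mealy machine: each (state, event) pair maps to a (next_state, output)
# pair, so the output depends on both the current state and the input.
TRANSITIONS = {
    ("CLOSED", "open"):       ("SYN_SENT", "send SYN"),
    ("SYN_SENT", "syn_ack"):  ("ESTABLISHED", "send ACK"),
    ("ESTABLISHED", "close"): ("FIN_SENT", "send FIN"),
    ("FIN_SENT", "ack"):      ("CLOSED", None),
}

def step(state: str, event: str):
    return TRANSITIONS[(state, event)]

state = "CLOSED"
for event in ["open", "syn_ack", "close", "ack"]:
    state, output = step(state, event)
    print(state, output)
```

Because every transition is an explicit table entry, such a model can be exhaustively checked, which is one reason state machines are popular in protocol verification.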
For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures. Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language. Source independence of the specification provides wider interoperability. Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. This activity is referred to as protocol development. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often the members control large market shares relevant to the protocol, and in many cases standards are enforced by law or by government because they are thought to serve an important public interest, so obtaining approval can be very important for a protocol.
Classification schemes for protocols usually focus on the domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks respectively. An example of function is a tunneling protocol, which is used to encapsulate packets in a high-level protocol so that the packets can be passed across a transport system using the high-level protocol. A layering scheme combines both function and domain of use. The dominant layering schemes are the ones proposed by the IETF and by ISO. Despite the fact that the underlying assumptions of the layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes. The layering scheme from the IETF is called Internet layering or TCP/IP layering. The layering scheme from ISO is called the OSI model or ISO layering. In networking...
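The tunneling idea above is simple to show in code: the entire inner packet, header and all, becomes the opaque payload of an outer carrier packet. The 3-byte outer header used here (protocol identifier plus length) is hypothetical, though 47 happens to be the real IP protocol number assigned to GRE, a common tunneling protocol:

```python
import struct

def wrap(proto_id: int, packet: bytes) -> bytes:
    # Outer header: 1-byte protocol id, 2-byte length (network order),
    # followed by the unmodified inner packet as payload.
    return struct.pack("!BH", proto_id, len(packet)) + packet

def unwrap(frame: bytes):
    proto_id, length = struct.unpack("!BH", frame[:3])
    return proto_id, frame[3:3 + length]

inner = b"\x01\x02inner-packet"   # pretend this is a whole packet
tunneled = wrap(47, inner)        # 47 = GRE's assigned IP protocol number
print(unwrap(tunneled) == (47, inner))  # True
```

The transport system only ever parses the outer header; the inner packet passes through untouched, which is the defining property of a tunnel.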
1. Radia Perlman: Interconnections: Bridges, Routers, Switches, and Internetworking Protocols. 2nd Edition. Addison-Wesley, 1999. ISBN 0-201-63448-1. In particular Ch. 18 on "network design folklore", also available online at http://www.informit.com/articles/article.aspx?p=20482
2. Gerard J. Holzmann: Design and Validation of Computer Protocols. Prentice Hall, 1991. ISBN 0-13-539925-4. Also available online at http://spinroot.com/spin/Doc/Book91.html
3. Douglas E. Comer (2000). Intern...
The Internet Protocol gets information from a source computer to a destination computer. It sends this information in the form of packets. There are two versions of the Internet Protocol currently in use: IPv4 and IPv6, with IPv4 being the most widely used. IP also gives computers an IP address to identify each other, much like a typical physical address. IP is the primary protocol in the Internet Layer of the Internet Protocol Suite, a set of communications protocols organized into four abstraction layers (the separate OSI model defines seven). The main purpose and task of IP is the delivery of datagrams from the source host (source computer) to the destination host (receiving computer) based on their addresses. To achieve this, IP includes methods and structures for putting tags (address information, which is part of metadata) within datagrams. The process of putting these tags on datagrams is called encapsulation. Think of an analogy with the postal system. IP is similar to the U.S. Postal Syst...
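The "tagging" step described above can be sketched as prepending the source and destination addresses to a payload. This is loosely modeled on the IPv4 address fields only; it is not a complete, standards-conformant IPv4 header:

```python
import ipaddress

def make_datagram(src: str, dst: str, payload: bytes) -> bytes:
    # ipaddress.IPv4Address(...).packed gives the address as 4 bytes
    # in network byte order, exactly as it appears on the wire.
    src_b = ipaddress.IPv4Address(src).packed
    dst_b = ipaddress.IPv4Address(dst).packed
    return src_b + dst_b + payload

dgram = make_datagram("192.0.2.1", "198.51.100.7", b"data")
print(dgram[:4])   # source address bytes
print(dgram[4:8])  # destination address bytes
```

Routers along the path read only these address tags to make forwarding decisions; the payload is delivered unchanged, which is the postal-system analogy in miniature.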
ARPANET, the early ancestor of the internet, is often said to have been designed to survive a nuclear war: if one computer was destroyed, communication between all the other computers would still work. Computer networks still follow this same design. Computers talking to each other handle the "smart" functions to simplify computer networks. The end nodes will check for errors instead of a central authority. Keeping the "smart" things on the end computers or nodes follows the end-to-end principle. The Internet Protocol sends packets out without ensuring they arrive safely. This is best-effort delivery, and is unreliable. Packets could get messed up, lost, duplicated, or received out of order. Higher level protocols like the Transmission Control Protocol (TCP) ensure packets are delivered correctly. IP is also connectionless, so it does not keep track of communications. Internet Protocol Version 4 (IPv4) uses a checksum to check for errors in an IP header. Every checksum is unique to a source/destination comb...
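The IPv4 header checksum mentioned above is the standard Internet checksum (RFC 1071): a one's-complement sum of the header taken as 16-bit words. The sample header bytes below are illustrative values; a receiver verifies a header by checksumming it with the checksum field included, which must yield zero:

```python
def internet_checksum(data: bytes) -> int:
    # One's-complement sum of 16-bit big-endian words (RFC 1071).
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# Sender: checksum the header with its checksum field set to zero.
header = bytes.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
print(hex(internet_checksum(header)))  # 0xb861

# Receiver: checksumming the full header (checksum included) gives 0.
full = bytes.fromhex("45000073000040004011" "b861" "c0a80001c0a800c7")
print(internet_checksum(full))  # 0
```

Because the checksum covers the addresses and other header fields, any single-bit corruption of those fields changes the result, letting the receiver discard the damaged datagram.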
In 1974, Vint Cerf and Bob Kahn published a paper in the IEEE Transactions on Communications called "A Protocol for Packet Network Intercommunication". The paper described a way for computers to talk to each other using packet switching. A big part of this idea was the "Transmission Control Program". The Transmission Control Program was too big, so it was split into TCP and IP. This model is now called the DoD Internet Model and Internet Protocol Suite, or the TCP/IP Model. Versions 0 to 3 of IP were experimental, and used between 1977 and 1979. IPv4 addresses will run out, because the number of possible addresses is finite. To fix this, the IETF created IPv6, which has many more addresses. While IPv4 has about 4.3 billion addresses, IPv6 has 340 undecillion of them, making address exhaustion practically impossible. IPv5 was reserved for the Internet Stream Protocol, which was only used experimentally.
In general, a domain name identifies a network domain or represents an Internet Protocol (IP) resource, such as a personal computer used to access the Internet, a server hosting a website, the website itself, or any other service communicated via the Internet. In 2017, 330.6 million domain names had been registered.
AppleTalk is a discontinued proprietary suite of networking protocols developed by Apple Inc. for their Macintosh computers. AppleTalk includes a number of features that allow local area networks to be connected with no prior setup or the need for a centralized router or server of any sort.