Search results
Top results related to is c++ a programming language examples of two
Top Answer
Answered Oct 10, 2019 · 11 votes
TL;DR: The C++ abstract machine is a type of PRAM (Parallel Random Access Machine).
From the Von Neumann Languages Wikipedia article you linked:
Many widely used programming languages such as C, C++ and Java have ceased to be strictly von Neumann by adding support for parallel processing, in the form of threads.
Cease describes a transition from being to not-being. So yes, before C++11 added threads, C++ was strictly a Von Neumann language according to Wikipedia. (And after it's still basically a VN language; having multiple threads sharing the same address-space doesn't fundamentally change how C++ works.)
The interesting parts of being a Von Neumann architecture in this context:
- Having addressable RAM at all, allowing efficient access (modulo cache / paging) to any object at any time
- Storing the program in RAM: function pointers are possible and efficient, without requiring an interpreter
- Having a program counter that steps through instructions in the stored program: The natural model is an imperative programming language that does one thing at a time. This is so fundamental that it's easy to forget it's not the only model! (vs. an FPGA or ASIC or something where all gates potentially do something in parallel every clock cycle. Or a MIMD GPU where a computational "kernel" you write is run over all the data potentially in parallel, without implicit sequencing of what order each element is processed in. Or Computational RAM: put ALUs in the memory chips to bypass the Von Neumann bottleneck)
IDK why the wiki article mentions self-modifying code, though; like most languages, ISO C++ doesn't standardize that and is fully compatible with ahead-of-time compilation for a split-bus / split-address-space Harvard architecture. (No eval or anything else that would require an interpreter or JIT.) Or on a normal CPU (Von Neumann), strict W^X memory protection and never using mprotect to change page permissions from writeable to executable.
Of course most real C++ implementations do provide well-defined ways to write machine-code into a buffer and cast to a function pointer, as extensions. (e.g. GNU C/C++'s __builtin___clear_cache(start, end) is named for I-cache sync, but defined in terms of making it safe to call data as a function wrt. dead-store elimination optimizations as well, so it's possible for code to break without it even on x86 which has coherent I-caches.) So implementations can extend ISO C++ to take advantage of this feature of Von Neumann architectures; ISO C++ is intentionally limited in scope to allow for differences between OSes and stuff like that.
Note that being Von Neumann does not strictly imply supporting indirect addressing modes. Some early CPUs didn't, and self-modifying code (to rewrite an address hard-coded in an instruction) was necessary to implement things that we now use indirection for.
Also note that John Von Neumann was a really famous guy, with his name attached to a lot of fundamental things. Some of the connotations of Von Neumann architecture (as opposed to Harvard) aren't really relevant in all contexts. e.g. the "Von Neumann language" term doesn't so much care about Von Neumann vs. Harvard; It cares about stored-program with a program counter vs. something like Cellular Automata or a Turing machine (with a real tape). Getting extra bandwidth by using a separate bus (or just split caches) to fetch instructions (Harvard) is just a performance optimization, not a fundamental change.
What is an abstract machine model / model of computation anyway?
First of all, there are some models of computation that are weaker than Turing machines, like Finite State Machines. There are also non-sequential models of computation, for example Cellular Automata (Conway's Game of Life), where multiple things happen in parallel at each "step".
The Turing machine is the most widely-known (and mathematically simple) sequential abstract machine that is as "strong" as we know how to make. Without any kind of absolute memory addressing, just relative movement on the tape, it naturally provides infinite storage. This is important, and makes all other kinds of abstract machines very unlike real CPUs in some ways. Remember, these models of computation are used for theoretical computer science, not engineering. Problems like finite amounts of memory or performance aren't relevant to what's computable in theory, only in practice.
If you can compute something on a Turing machine, you can compute it on any other Turing-complete model of computation (by definition), perhaps with a much simpler program or perhaps not. Turing machines aren't very nice to program, or at least very different from assembly language for any real CPU. Most notably, the memory isn't random-access. And they can't easily model parallel computing / algorithms. (If you want to prove things about an algorithm in the abstract, having an implementation of it for an abstract machine of some sort is probably a good thing.)
It's also potentially interesting to prove what features an abstract machine needs to have in order to be Turing complete, so that's another motivation for developing more of them.
There are many others that are equivalent in terms of computability. The RAM machine model is most like real-world CPUs that have an array of memory. But being a simple abstract machine, it doesn't bother with registers. In fact, just to make things more confusing, it calls its memory cells an array of registers. A RAM machine supports indirect addressing, so the correct analogy to real world CPUs is definitely to memory, not CPU registers. (And there are an unbounded number of registers, each of unbounded size. Addresses keep going forever and every "register" needs to able to hold a pointer.) A RAM machine can be Harvard: program stored in a separate finite-state portion of the machine. Think of it like a machine with memory-indirect addressing modes so you can keep "variables" in known locations, and use some of them as pointers to unbounded-size data structures.
The program for an abstract RAM machine looks like assembly language, with load/add/jnz and whatever other selection of instructions you want it to have. The operands can be immediates or register numbers (what normal people would call absolute addresses). Or if the model has an accumulator, then you have a load/store machine with an accumulator a lot more like a real CPU.
If you ever wondered why a "3-address" machine like MIPS was called that instead of 3-operand, it's probably 1. because the instruction encoding needs room / I-fetch bandwidth through the Von Neumann bottleneck for 3 explicit operand locations (register number) and 2. because in a RAM abstract machine, operands are memory addresses = register numbers.
C++ can't be Turing complete: pointers have a finite size.
Of course, C++ has huge differences from a CS abstract machine model: C++ requires every type to have a compile-time-constant finite sizeof, so C++ can't be Turing-complete if you include the infinite-storage requirement. Everything in Is C actually Turing-complete? on cs.SE applies to C++, too: the requirement that types have a fixed width is a showstopper for infinite storage. See also https://en.wikipedia.org/wiki/Random-access_machine#Finite_vs_unbounded
So Computer Science abstract machines are silly, what about the C++ Abstract machine?
They of course have their purposes, but there's a lot more interesting stuff we can say about C++ and what kind of machine it assumes if we get a bit less abstract and also talk about what a machine can do efficiently. Once we talk about finite machines and performance, these differences become relevant: first, whether a machine can run C++ at all, and second, whether it can run without huge and/or unacceptable performance overheads. (e.g. the HW will need to support pointers fairly directly, probably not with self-modifying code that stores the pointer value into every load/store instruction that uses it. And that wouldn't work in C++11 where threading is part of the language: the same code can be operating on 2 different pointers at once.)
We can look in more detail at the model of computation assumed by the ISO C++ standard, which describes how the language works in terms of what happens on the Abstract Machine. Real implementations are required to run code on real hardware that runs "as-if" the abstract machine were executing C++ source, reproducing any/all observable behaviour (observable by other parts of the program without invoking UB).
C/C++ has memory and pointers, so it's pretty definitely a type of RAM machine.
Or these days, a Parallel random-access machine, adding shared memory to the RAM model, and giving each thread its own program counter. Given that std::atomic<> release-sequences make all previous operations visible to other threads, the "establishing a happens-before relationship" model of synchronization is based around coherent shared memory. Emulating it on top of something that required manual triggering of syncing / flushing would be horrible for performance. (Very clever optimizations may prove when that can be delayed so not every release-store has to suffer, but seq-cst will probably be horrible. seq-cst has to establish a global order of operations that all threads agree on; that's hard unless a store becomes visible to all other threads at the same time.)
But note that in C++, actual simultaneous access is UB unless you do it with atomic<T>. This allows the optimizer to freely use CPU registers for locals, temporaries, and even globals without exposing registers as a language feature. UB allows optimization in general; that's why modern C/C++ implementations are not portable assembly language.
The historical register keyword in C/C++ means a variable can't have its address taken, so even a non-optimizing compiler can keep it in a CPU register, not memory. We're talking about CPU registers, not the computer science RAM Machine "register = addressable memory location". (Like rax..rsp/r8..r15 on x86, or r0..r31 on MIPS). Modern compilers do escape analysis and naturally keep locals in registers normally, unless they have to spill them. Other types of CPU registers are possible, e.g. a register-stack like x87 FP registers. Anyway, the register keyword existed to optimize for this type of machine. But it doesn't rule out running on a machine with no registers, only memory-memory instructions.
C++ is designed to run well on a Von Neumann machine with CPU registers, but the C++ abstract machine (that the standard uses to define the language) doesn't allow execution of data as code, or say anything about registers. Each C++ thread does have its own execution context, though, and that models PRAM threads/cores each having their own program counter and callstack (or whatever an implementation uses for automatic storage, and for figuring out where to return.) In a real machine with CPU registers, they're private to each thread.
All real-world CPUs are Random Access Machines, and have CPU registers separate from addressable / indexable RAM. Even CPUs that can only compute with a single accumulator register typically have at least one pointer or index register that at least allows some limited array indexing. At least all CPUs that work well as C compiler targets.
Without registers, every machine instruction encoding would need absolute memory addresses for all operands. (Maybe like a 6502 where the "zero page", the low 256 bytes of memory, was special, and there are addressing modes that use a word from the zero page as the index or pointer, to allow 16-bit pointers without any 16-bit architectural registers. Or something like that.) See Why do C to Z80 compilers produce poor code? on RetroComputing.SE for some interesting stuff about real-world 8-bit CPUs where a fully compliant C implementation (supporting recursion and reentrancy) is quite expensive to implement. A lot of the slowness is that 6502 / Z80 systems were too small to host an optimizing compiler. But even a hypothetical modern optimizing cross-compiler (like a gcc or LLVM back-end) would have a hard time with some things. See also a recent answer on What is an unused memory address? for a nice explanation of 6502's zero-page indexed addressing mode: 16-bit pointer from an absolute 8-bit address in memory + 8-bit register.
A machine without indirect addressing at all couldn't easily support array indexing, linked lists, and definitely not pointer variables as first-class objects. (Not efficiently anyway)
What's efficient on real machines -> what idioms are natural
Most of C's early history was on PDP-11, which is a normal mem + register machine where any register can work as a pointer. Automatic storage maps to registers, or to space on the callstack when they need to be spilled. Memory is a flat array of bytes (or chunks of char), no segmentation.
Array indexing is just defined in terms of pointer arithmetic instead of being its own thing perhaps because PDP-11 could do that efficiently: any register can hold an address and be dereferenced. (vs. some machines with only a couple special registers of pointer width, and the rest narrower. That was common on an 8-bit machine, but early 16-bit machines like PDP-11 had little enough RAM that one 16-bit register was enough for an address).
See Dennis Ritchie's article The Development of the C Language for more history; C grew out of B on PDP-7 Unix. (The first Unix was written in PDP-7 asm). I don't know much about PDP-7, but apparently BCPL and B also use pointers that are just integers, and arrays are based on pointer-arithmetic.
PDP-7 is an 18-bit word-addressable ISA. That's probably why B has no char type. But its registers are wide enough to hold pointers so it does naturally support B and C's pointer model (that pointers aren't really special, you can copy them around and deref them, and you can take the address of anything). So flat memory model, no "special" area of memory like you find on segmented machines or some 8-bit micros with a zero page.
Things like C99 VLAs (and unlimited size local variables) and unlimited reentrancy and recursion imply a callstack or other allocation mechanism for function local-variable context (aka stack frames on a normal machine that uses a stack pointer.)
1/5
Given a = 12 and b = 36, write a C function/macro that returns 3612 without using arithmetic, strings, or predefined functions.
Below is one solution that uses the token-pasting operator (##) of C macros. For example, the expression a##b expands to the concatenation of the tokens a and b.
Below is a working C code.
#include <stdio.h>

#define merge(a, b) b##a

int main(void)
{
    printf("%d ", merge(12, 36));
    return 0;
}
Output:
3612
Thanks to an anonymous user for suggesting this solution.
3/5
Top Answer
Answered Feb 18, 2010 · 3 votes
I would seriously suggest that you use Python for an application like this. It will lift the burden of decoding the strings (not to mention allocating memory for them and the like). You will be free to concentrate on your problem, instead of problems of the language.
For example, the code below works if the sentence above is contained in a utf-8 file and you are using Python 2.x. If you use Python 3.x it is even more readable, as you don't have to prefix the unicode strings with u"...", as in this example (but you will be missing a lot of 3rd-party libraries):
separators = [u"।", u",", u"."]
text = open("indiantext.txt").read()
# This converts the encoded text to an internal unicode object, where
# all characters are properly recognized as an entity:
text = text.decode("utf-8")

# this breaks the text on the white spaces, yielding a list of words:
words = text.split()

counter = 1
output = ""
for word in words:
    # if the last char is a separator, and is joined to the word:
    if word[-1] in separators and len(word) > 1:
        # word up to the second to last char:
        output += word[:-1] + u"(%d) " % counter
        counter += 1
        # last char
        output += word[-1] + u"(%d) " % counter
    else:
        output += word + u"(%d) " % counter
    counter += 1

print output
This is an "unfolded" example; as you get more used to Python there are shorter ways to express this. You can learn the basics of the language in just a couple of hours, following a tutorial (for example, the one at http://python.org itself).
Other Answers
Answered Feb 18, 2010 · 7 votes
Wow, already 6 answers and not a single one actually does what mgj wanted. jkp comes close, but then drops the ball by deleting the daṇḍa.
Perl to the rescue. Less code, fewer bugs.
use utf8;
use strict;
use warnings;
use Encode qw(decode);

my $index;
print join ' ', map { $index++; "$_($index)" } split /\s+|(?=।)/, decode 'UTF-8', <>;
# returns भारत(1) का(2) इतिहास(3) काफी(4) समृद्ध(5) एवं(6) विस्तृत(7) है(8) ।(9)
edit: changed to read from STDIN as per comment, added best practices pragmas
Answered Feb 18, 2010 · 6 votes
If you are working in C++ and decide that UTF-8 is a viable encoding for your application you could look at utfcpp which is a library that provides many equivalents for types found in the stdlib (such as streams and string processing functions) but abstracts away the difficulties of dealing with a variable length encoding like UTF8.
If on the other hand you are free to use any language, I would say that doing something like this in something like Python would be far easier: its unicode support is very good, as are the bundled string-processing routines.
#!/usr/bin/env python
# encoding: utf-8

string = u"भारत का इतिहास काफी समृद्ध एवं विस्तृत है।"
parts = []
for part in string.split():
    parts.extend(part.split(u"।"))

print "No of Parts: %d" % len(parts)
print "Parts: %s" % parts
Outputs:
No of Parts: 9
Parts: [u'\u092d\u093e\u0930\u0924', u'\u0915\u093e', u'\u0907\u0924\u093f\u0939\u093e\u0938', u'\u0915\u093e\u092b\u0940', u'\u0938\u092e\u0943\u0926\u094d\u0927', u'\u090f\u0935\u0902', u'\u0935\u093f\u0938\u094d\u0924\u0943\u0924', u'\u0939\u0948', u'']
Also, since you are doing natural language processing, you may want to take a look at the NLTK library for Python which has a wealth of tools for just this kind of job.
4/5
Top Answer
Answered Jun 21, 2013 · 5 votes
These suggestions are specific to GCC. You can use the gcov coverage tool to get a detailed account of which parts of a program have been executed and how often. You have to pass some special options to GCC to generate the proper instrumentation and output for gcov to process.
--coverage
This option is used to compile and link code instrumented for coverage analysis. The option is a synonym for -fprofile-arcs -ftest-coverage (when compiling) and -lgcov (when linking). See the documentation for those options for more details.
Then, when you execute your program, some profiling and coverage data is generated. You can then invoke gcov to analyze that output. Below is an example of output taken from the link above:
        -:    0:Source:tmp.c
        -:    0:Graph:tmp.gcno
        -:    0:Data:tmp.gcda
        -:    0:Runs:1
        -:    0:Programs:1
        -:    1:#include <stdio.h>
        -:    2:
        -:    3:int main (void)
        1:    4:{
        1:    5:  int i, total;
        -:    6:
        1:    7:  total = 0;
        -:    8:
       11:    9:  for (i = 0; i < 10; i++)
       10:   10:    total += i;
        -:   11:
        1:   12:  if (total != 45)
    #####:   13:    printf ("Failure\n");
        -:   14:  else
        1:   15:    printf ("Success\n");
        1:   16:  return 0;
        -:   17:}
If you want to implement your own instrumentation to log the call history of the program, you can use the -finstrument-functions option and its related options on GCC.
-finstrument-functions
Generate instrumentation calls for entry and exit to functions. Just after function entry and just before function exit, the following profiling functions are called with the address of the current function and its call site. (On some platforms, __builtin_return_address does not work beyond the current function, so the call site information may not be available to the profiling functions otherwise.)
void __cyg_profile_func_enter (void *this_fn, void *call_site);
void __cyg_profile_func_exit  (void *this_fn, void *call_site);
The first argument is the address of the start of the current function, which may be looked up exactly in the symbol table.
In C++, your implementation of those hooks should be declared as extern "C". You can implement the hooks to log each time a function is called. You don't get the function names, but you can post-process the pointers afterward with objdump or addr2line.