Sunday, February 23, 2014

Mac OS


Mac OS is a series of graphical user interface-based operating systems developed by Apple Inc. for their Macintosh line of computer systems. The original version was the integral and unnamed system software first introduced in 1984 with the original Macintosh, and referred to simply as the "System" software. The System was renamed to Mac OS in 1996 with version 7.6. The System is credited with popularizing the graphical user interface concept.

Mac OS releases fall into two major series. Up to major revision 9, from 1984 to 2000, the series is historically known as the classic Mac OS. Major revision 10, with minor revisions 10.0 through 10.9, has been marketed since 2001 as Mac OS X and now simply OS X. The two series share a general interface design and some application frameworks for compatibility, but have deeply different architectures.


Design concept

Apple's original conception of the System deliberately sought to minimize the user's conceptual awareness of the operating system. Tasks that required operating-system knowledge on other systems could be accomplished by intuitive mouse gestures and simple graphic controls on a Macintosh, making the system more user-friendly and easily mastered. This differentiated it from then-current systems, such as MS-DOS, which were more technically challenging to operate.

The core of the system software was held in ROM, with updates originally provided free of charge by Apple dealers (on floppy disk). The user's involvement in an upgrade of the operating system was also minimized to running an installer, or simply replacing system files. This simplicity is what differentiated the product from others.
 



OS X

OS X, introduced as Mac OS X and renamed OS X in 2012, is the latest version of Apple's operating system. Although it is officially designated as simply "version 10" of the Mac OS, it has a history largely independent of the earlier Mac OS releases.

The operating system is the successor to Mac OS 9 and the "classic" Mac OS. It is, however, a Unix-like operating system, based on the NeXTSTEP operating system and the Mach kernel, which Apple acquired by purchasing NeXT Computer, with NeXT's CEO Steve Jobs returning to Apple at that time. OS X also makes use of BSD code. There have been ten significant releases of OS X, the most recent being OS X 10.9, known as Mavericks. Before 10.9 came 10.8 (Mountain Lion), 10.7 (Lion), 10.6 (Snow Leopard), 10.5 (Leopard), 10.4 (Tiger), 10.3 (Panther), 10.2 (Jaguar), 10.1 (Puma), and 10.0 (Cheetah).

OS X has also had six significant releases as OS X Server. The first of these, Mac OS X Server 1.0, was released in beta in 1999. The server versions are architecturally identical to the client versions, differing in their inclusion of tools for server management, including tools for managing OS X-based workgroups, mail servers, and web servers, among others. Since the name change to OS X, OS X Server is no longer sold as a separate operating system product; the server tools can instead be added to the single OS X product, providing the same functionality.

OS X Server is available built-to-order on Mac Mini and Mac Pro computers as part of a server package. Unlike the client version, OS X Server can be run in a virtual machine using virtualization software such as Parallels Desktop and VMware Fusion.

OS X is also the basis for iOS (previously iPhone OS), used on Apple's iPhone, iPod Touch, iPad, and Apple TV.

 






Application Software


Application software is all the computer software that causes a computer to perform useful tasks (compare with computer viruses) beyond the running of the computer itself. A specific instance of such software is called a software application, program, application or app.

The term is used to contrast such software with system software, which manages and integrates a computer's capabilities but does not directly perform tasks that benefit the user. The system software serves the application, which in turn serves the user.

Examples include accounting software, enterprise software, graphics software, media players, and office suites. Many application programs deal principally with documents. Applications may be bundled with the computer and its system software or published separately, and may be developed as anything from commercial products to university projects.

Application software applies the power of a particular computing platform or system software to a particular purpose.

Some applications are available in versions for several different platforms; others have narrower requirements and are thus called, for example, a Geography application for Windows, an Android application for education, or Linux gaming. Sometimes a new and popular application arises which only runs on one platform, increasing the desirability of that platform. This is called a killer application.



In information technology, an application is a computer program designed to help people perform an activity. An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and programming tools (with which computer programs are created). Depending on the activity for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Some application packages offer considerable computing power by focusing on a single task, such as word processing; others, called integrated software, offer somewhat less power but include several applications. User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.
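As a small illustration of the user-written software described above, an email filter can be sketched in a few lines of Python. The message structure and the keyword list here are hypothetical examples, not part of any real mail system:

```python
# A minimal sketch of a user-written email filter.
# The message format and the keywords are made-up examples.

SPAM_KEYWORDS = {"lottery", "winner", "free money"}

def classify(message):
    """Return the folder a message should be filed into."""
    subject = message.get("subject", "").lower()
    if any(keyword in subject for keyword in SPAM_KEYWORDS):
        return "junk"
    return "inbox"

print(classify({"subject": "You are a lottery WINNER!"}))   # junk
print(classify({"subject": "Meeting agenda for Monday"}))   # inbox
```

Real mail clients expose rule systems far richer than this, but the principle is the same: a small user-written program tailoring the system to one user's needs.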


Application software classification

Applications can also be classified by computing platform such as a particular operating system, delivery network such as in cloud computing and Web 2.0 applications, or delivery devices such as mobile apps for mobile devices.

The operating system itself can be considered application software when performing simple calculating, measuring, rendering, and word processing tasks not used to control hardware via command-line interface or graphical user interface. This does not include application software bundled within operating systems such as a software calculator or text editor.


Information worker software


  • Enterprise resource planning
  • Accounting software
  • Task and scheduling
  • Field service management
  • Data management
  • Contact management
  • Spreadsheet
  • Personal database
  • Documentation
  • Document automation/assembly
  • Word processing
  • Desktop publishing software
  • Diagramming software
  • Presentation software
  • Email
  • Reservation systems
  • Financial software
  • Day trading software
  • Banking software
  • Clearing systems
  • Arithmetic software


 

Content access software

  • Electronic media software
  • Web browser
  • Media players
  • Hybrid editor players



Entertainment software

  • Screen savers
  • Video games
  • Arcade games
  • Video game console emulator
  • Personal computer games
  • Console games
  • Mobile games


Educational software

  • Classroom management
  • Reference software
  • Sales readiness software
  • Survey management



Enterprise infrastructure software


  • Business workflow software
  • Database management system (DBMS) software
  • Digital asset management (DAM) software
  • Document management software
  • Geographic information system (GIS) software


Simulation software

  • Computer simulators
  • Scientific simulators
  • Social simulators
  • Battlefield simulators
  • Emergency simulators
  • Vehicle simulators
  • Flight simulators
  • Driving simulators
  • Simulation games
  • Vehicle simulation games

 

Media development software

  • Image organizer
  • Media content creating/editing
  • 3D computer graphics software
  • Animation software
  • Graphic art software
  • Image editing software
  • Raster graphics editor
  • Vector graphics editor
  • Video editing software
  • Sound editing software
  • Digital audio editor
  • Music sequencer
  • Scorewriter
  • Hypermedia editing software
  • Web development software
  • Game development tool

 

Product engineering software

  • Hardware engineering
  • Computer-aided engineering
  • Computer-aided design (CAD)
  • Finite element analysis
  • Software engineering
  • Computer language editor
  • Compiler software
  • Integrated development environment
  • Game development software
  • Debuggers
  • Program testing tools
  • License manager

 

Computer program


A computer program, or just a program, is a sequence of instructions written to perform a specified task with a computer. A computer requires programs to function, typically executing the program's instructions in a central processor. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form, from which executable programs are derived (e.g., compiled), enables a programmer to study and develop its algorithms. A collection of computer programs and related data is referred to as software.

Computer source code is typically written by computer programmers. Source code is written in a programming language that usually follows one of two main paradigms: imperative or declarative programming. Source code may be converted into an executable file (sometimes called an executable program or a binary) by a compiler and later executed by a central processing unit. Alternatively, computer programs may be executed with the aid of an interpreter, or may be embedded directly into hardware.

Computer programs may be categorized along functional lines into system software and application software. Two or more computer programs may run simultaneously on one computer from the perspective of the user, a process known as multitasking.

 



 

Programming



Computer programming is the iterative process of writing or editing source code. Editing source code involves testing, analyzing, refining, and sometimes coordinating with other programmers on a jointly developed program. A person who practices this skill is referred to as a computer programmer, software developer, or, sometimes, coder.

The sometimes lengthy process of computer programming is usually referred to as software development. The term software engineering is becoming popular as the process is seen as an engineering discipline.


 

Paradigms

Computer programs can be categorized by the programming language paradigm used to produce them. Two of the main paradigms are imperative and declarative.

Programs written using an imperative language specify an algorithm using declarations, expressions, and statements.[4] A declaration couples a variable name to a datatype, for example: var x: integer;. An expression yields a value, for example: 2 + 2 yields 4. Finally, a statement might assign an expression to a variable or use the value of a variable to alter the program's control flow, for example: x := 2 + 2; if x = 4 then do_something();. One criticism of imperative languages is the side effect of an assignment statement on a class of variables called non-local variables.[5]
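The snippets above use Pascal-like pseudocode. The same imperative sequence can be sketched in Python, which infers the variable's type rather than requiring a declaration:

```python
# Imperative style: a sequence of statements that change program state.
x = 2 + 2            # an assignment statement; the expression 2 + 2 yields 4

if x == 4:           # a statement using x's value to alter control flow
    print("do_something was reached")
```

The essence of the paradigm is visible even in this tiny example: state (the variable x) is created and then inspected by later statements.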

Programs written using a declarative language specify the properties that have to be met by the output. They do not specify details expressed in terms of the control flow of the executing machine but of the mathematical relations between the declared objects and their properties. Two broad categories of declarative languages are functional languages and logical languages. The principle behind functional languages (like Haskell) is to not allow side effects, which makes it easier to reason about programs like mathematical functions.[5] The principle behind logical languages (like Prolog) is to define the problem to be solved — the goal — and leave the detailed solution to the Prolog system itself.[6] The goal is defined by providing a list of subgoals. Then each subgoal is defined by further providing a list of its subgoals, etc. If a path of subgoals fails to find a solution, then that subgoal is backtracked and another path is systematically attempted.
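Haskell and Prolog are the languages named above; purely as a rough illustration, the functional idea of computing by mathematical definition rather than by mutating state can be approximated in Python:

```python
# Functional style approximated in Python: a pure function whose result
# depends only on its arguments, with no side effects.
def factorial(n):
    """Defined by its mathematical relation: n! = n * (n-1)!, 0! = 1."""
    return 1 if n == 0 else n * factorial(n - 1)

# An imperative version, by contrast, works by mutating a local variable.
def factorial_imperative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))   # 120
```

Both functions compute the same value; the functional version states what the result is, while the imperative version spells out how the machine should build it.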

The form in which a program is created may be textual or visual. In a visual language program, elements are graphically manipulated rather than textually specified.

 

Execution and storage

Typically, computer programs are stored in non-volatile memory until requested either directly or indirectly to be executed by the computer user. Upon such a request, the program is loaded into random access memory, by a computer program called an operating system, where it can be accessed directly by the central processor. The central processor then executes ("runs") the program, instruction by instruction, until termination. A program in execution is called a process. Termination is either by normal self-termination or by error — software or hardware error.

Automatic program generation

Generative programming is a style of computer programming that creates source code through generic classes, prototypes, templates, aspects, and code generators to improve programmer productivity. Source code is generated with programming tools such as a template processor or an integrated development environment. The simplest form of source code generator is a macro processor, such as the C preprocessor, which replaces patterns in source code according to relatively simple rules.
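The C preprocessor named above is the classic macro processor. The underlying pattern-replacement idea can be sketched with Python's standard string.Template class; the template text here is an invented example:

```python
from string import Template

# A simple source code generator: replace named patterns in template text
# according to straightforward substitution rules.
template = Template("int buffer[$SIZE];  /* $NOTE */")

generated = template.substitute(SIZE=256, NOTE="generated code")
print(generated)  # int buffer[256];  /* generated code */
```

Real generators (the C preprocessor, IDE wizards, template engines) add conditionals, includes, and recursion, but each is at heart this same substitution step applied systematically.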

Software engines output source code or markup code that simultaneously becomes the input to another computer process. Application servers are software engines that deliver applications to client computers. For example, a Wiki is an application server that lets users build dynamic content assembled from articles. Wikis generate HTML, CSS, and JavaScript, which are then interpreted by a web browser.

 

Functional categories

Computer programs may be categorized along functional lines. The main functional categories are system software and application software. System software includes the operating system, which couples computer hardware with application software. The purpose of the operating system is to provide an environment in which application software executes in a convenient and efficient manner. In addition to the operating system, system software includes utility programs that help manage and tune the computer. If a computer program is not system software then it is application software. Application software includes middleware, which couples the system software with the user interface. Application software also includes utility programs that help users solve application problems, like the need for sorting.

Sometimes development environments for software development are seen as a functional category of their own, especially in the context of human-computer interaction and programming language design. Development environments gather system software (such as compilers and the system's batch-processing scripting languages) and application software (such as IDEs) for the specific purpose of helping programmers create new programs.


















Desktop computer


A desktop computer is a personal computer in a form intended for regular use at a single location, as opposed to a mobile laptop or portable computer. Early desktop computers were designed to lie flat on the desk, while modern towers stand upright. Most modern desktop computers have separate screens and keyboards.

Prior to the widespread use of microprocessors, a computer that could fit on a desk was considered remarkably small; the type of computers most commonly used were minicomputers, which were themselves desk-sized. Early personal computers, like the IBM PC, were enclosed in "desktop" cases, horizontally oriented to have the display screen placed on top, thus saving space on the user's actual desk. Over the course of the 1990s, desktop cases gradually became less common than the more-accessible tower cases that may be located on the floor under the desk rather than on a desk.



An all-in-one PC integrates the system's internal components into the same case as the display, allowing for easier portability and a smaller footprint, especially on designs using flat panel displays. Some recent all-in-one models also include touchscreen displays.

Apple has manufactured several popular examples of all-in-one computers, such as the original Macintosh of the mid-1980s and the iMac of the late 1990s and 2000s. This form factor was popular during the early 1980s for computers intended for professional use such as the Kaypro II, Osborne 1, TRS-80 Model II and Compaq Portable. Many manufacturers of home computers like Commodore and Atari included the computer's motherboard into the same enclosure as the keyboard; these systems were most often connected to a television set for display.

Like laptops, some all-in-one desktop computers are characterized by an inability to customize or upgrade internal components, as the systems' cases do not provide easy access except through panels which only expose connectors for RAM or storage device upgrades. However, newer models of all-in-one computers have changed their approach to this issue. Many of the current manufacturers are using standard off-the-shelf components and are designing upgrade convenience into their products.



When referring to an operating system or GUI, the Desktop is the system of organization of icons on a screen. The Microsoft Windows Desktop was first introduced with Microsoft Windows 95 and has been included with every version of Windows since.




On the classic Windows 95 Desktop, for example, the Desktop icons sit along the left-hand side of the screen, blue and white clouds serve as the default wallpaper, and the Taskbar runs along the bottom of the screen.

Tip: Press the shortcut Windows key + D at any time to return to the Windows Desktop.

What icons and items are found on the Windows Desktop?












Some of the most common icons you're likely to find on the Desktop include the My Computer icon, Recycle Bin, your Internet browser icon (e.g. Internet Explorer), and My Documents. On the Windows Desktop, you'll also have access to the Windows Start Menu through the Start button on the Taskbar and the Windows Notification Area.




In some versions of Windows, some or all of these icons may be missing; which default icons are shown can be changed in the Desktop icon settings.

Units of information


In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure the information contents or entropy of random variables.

The most common units are the bit, the capacity of a system which can exist in only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (binary power prefixes). Information capacity is a dimensionless quantity, because it refers to a count of binary symbols.

In 1928, Ralph Hartley observed a fundamental storage principle,[1] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm log_b N of the number N of possible states of that system. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c N = (log_c b) log_b N. Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.

When b is 2, the unit is the "bit" (a contraction of binary digit). A system with 8 possible states, for example, can store up to log_2 8 = 3 bits of information. Other units that have been named include:

    Base b = 3: the unit is called the "trit", and is equal to log_2 3 (≈ 1.585) bits.[2]
    Base b = 10: the unit is called the decimal digit, hartley, ban, decit, or dit, and is equal to log_2 10 (≈ 3.322) bits.[1][3][4][5]
    Base b = e, the base of natural logarithms: the unit is called the nat, nit, or nepit (from Neperian), and is worth log_2 e (≈ 1.443) bits.[1]

The trit, ban, and nat are rarely used to measure storage capacity, but the nat in particular is often used in information theory, because natural logarithms are sometimes easier to handle than logarithms in other bases.
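The conversion factors listed above follow directly from the change-of-base rule and can be checked with Python's standard math module:

```python
import math

# Information content of one symbol in base b, measured in bits: log_2(b).
def bits_per_symbol(b):
    return math.log2(b)

print(round(bits_per_symbol(3), 3))        # trit    ≈ 1.585 bits
print(round(bits_per_symbol(10), 3))       # ban/dit ≈ 3.322 bits
print(round(bits_per_symbol(math.e), 3))   # nat     ≈ 1.443 bits
print(bits_per_symbol(8))                  # 8 states store exactly 3.0 bits
```

Each value is just log_2 of the base, which is why a system with 8 states stores exactly 3 bits.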

Byte


 The byte /ˈbaɪt/ is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte has historically been hardware dependent and no definitive standards existed that mandated the size. The de facto standard of eight bits is a convenient power of two permitting the values 0 through 255 for one byte. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers optimize for this common usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit size.

The unit octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated at the time with the byte.

Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture; but today it almost always means eight bits, that is, an octet. A byte can represent 256 (2^8) distinct values, such as the integers 0 to 255, or −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte. Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
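The two interpretations of a byte's 256 values can be verified with Python's standard struct module, whose "B" and "b" format codes read a byte as unsigned and signed respectively:

```python
import struct

# One byte holds 2**8 = 256 distinct values.
assert 2 ** 8 == 256

# The same bit pattern 0xFF reads as 255 unsigned or -1 signed.
raw = bytes([0xFF])
unsigned, = struct.unpack("B", raw)   # unsigned byte: range 0..255
signed,   = struct.unpack("b", raw)   # signed byte: range -128..127
print(unsigned, signed)               # 255 -1
```

The byte itself carries no sign; the interpretation (0 to 255 versus −128 to 127) is chosen by the program reading it.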


Considerable confusion exists about the meanings of the SI (or metric) prefixes used with the unit byte, especially concerning prefixes such as kilo (k or K) and mega (M), as shown in the chart Prefixes for bit and byte. Because computer memory is designed with binary logic, multiples are expressed in powers of 2. Some portions of the software and computer industries often use powers-of-2 approximations of the SI-prefixed quantities, while producers of computer storage devices prefer strict adherence to SI powers-of-10 values. This is why a computer hard drive specified at, say, 100 GB contains about 93 GiB of storage space.
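The hard-drive example above is easy to check: a drive sold as 100 GB (decimal gigabytes) holds roughly 93 binary gibibytes:

```python
# Decimal (SI) gigabytes vs. binary gibibytes for a "100 GB" drive.
capacity_bytes = 100 * 10 ** 9        # marketed capacity: 100 GB
capacity_gib = capacity_bytes / 2 ** 30

print(round(capacity_gib, 1))         # 93.1
```

Neither number is wrong; the manufacturer counts in powers of ten while the operating system typically reports in powers of two.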

While the numerical difference between the decimal and binary interpretations is relatively small for the prefixes kilo and mega, it grows to over 20% for the prefix yotta.

 

Nibble

A group of four bits, or half a byte, is sometimes called a nibble or nybble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same amount of information as one hexadecimal digit. 
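Because a nibble carries exactly the information of one hexadecimal digit, any byte splits into two nibbles that map directly onto its two hex digits:

```python
# Split a byte into its high and low nibbles (4 bits each).
value = 0xA7               # one byte, two hex digits: A and 7

high = value >> 4          # high nibble: 0xA == 10
low = value & 0x0F         # low nibble:  0x7 == 7

print(high, low)           # 10 7
print(f"{value:02X}")      # A7
```

This is why hexadecimal is the conventional notation for raw bytes: each digit stands for exactly one nibble.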


Word, block, and page

Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 16 bits, but other past and current architectures use words of 8, 24, 32, 36, 56, 64, 80, or other numbers of bits.

Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").

Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.

Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.


Systematic multiples

Terms for large quantities of bits can be formed using the standard range of SI prefixes for powers of 10, e.g., kilo = 10^3 = 1,000 (kilobit or kbit), mega = 10^6 = 1,000,000 (megabit or Mbit), and giga = 10^9 = 1,000,000,000 (gigabit or Gbit). These prefixes are more often used for multiples of bytes, as in the kilobyte (1 kB = 8,000 bits), megabyte (1 MB = 8,000,000 bits), and gigabyte (1 GB = 8,000,000,000 bits).

However, for technical reasons, the capacities of computer memories and some storage units are often multiples of some large power of two, such as 2^28 = 268,435,456 bytes. To avoid such unwieldy numbers, people have often misused the SI prefixes to mean the nearest power of two, e.g., using the prefix kilo for 2^10 = 1,024, mega for 2^20 = 1,048,576, giga for 2^30 = 1,073,741,824, and so on. For example, a random-access memory chip with a capacity of 2^28 bytes would be referred to as a 256-megabyte chip. The table below illustrates these differences.
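The size of this "misuse" can be quantified: each binary power exceeds the corresponding SI power by a percentage that grows with the prefix, which is what the table below tabulates:

```python
# Percentage by which each binary power exceeds the corresponding SI power.
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

for k, name in enumerate(prefixes, start=1):
    binary = 2 ** (10 * k)       # e.g. kilo -> 2^10
    decimal = 10 ** (3 * k)      # e.g. kilo -> 10^3
    diff = (binary / decimal - 1) * 100
    print(f"{name}: {diff:.2f}%")

# A RAM chip of 2**28 bytes is 2**28 / 2**20 = 256 "megabytes".
print(2 ** 28 // 2 ** 20)        # 256
```

Running this reproduces the size-difference column of the table, from 2.40% for kilo up to 20.89% for yotta.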






Symbol   Prefix   SI meaning         Binary meaning     Size difference
K        kilo     10^3  = 1000^1     2^10 = 1024^1       2.40%
M        mega     10^6  = 1000^2     2^20 = 1024^2       4.86%
G        giga     10^9  = 1000^3     2^30 = 1024^3       7.37%
T        tera     10^12 = 1000^4     2^40 = 1024^4       9.95%
P        peta     10^15 = 1000^5     2^50 = 1024^5      12.59%
E        exa      10^18 = 1000^6     2^60 = 1024^6      15.29%
Z        zetta    10^21 = 1000^7     2^70 = 1024^7      18.06%
Y        yotta    10^24 = 1000^8     2^80 = 1024^8      20.89%



















In the past, uppercase K has been used instead of lowercase k to indicate 1024 instead of 1000. However, this usage was never consistently applied.

On the other hand, for external storage systems (such as optical disks), the SI prefixes were commonly used with their decimal values (powers of 10). There have been many attempts to resolve the confusion by providing alternative notations for power-of-two multiples. In 1998 the International Electrotechnical Commission (IEC) issued a standard for this purpose, namely a series of binary prefixes that use 1024 instead of 1000 as the main radix.























Symbol   Prefix               Example             Value
Ki       kibi (binary kilo)   1 kibibyte (KiB)    2^10 bytes = 1024 B
Mi       mebi (binary mega)   1 mebibyte (MiB)    2^20 bytes = 1024 KiB
Gi       gibi (binary giga)   1 gibibyte (GiB)    2^30 bytes = 1024 MiB
Ti       tebi (binary tera)   1 tebibyte (TiB)    2^40 bytes = 1024 GiB
Pi       pebi (binary peta)   1 pebibyte (PiB)    2^50 bytes = 1024 TiB
Ei       exbi (binary exa)    1 exbibyte (EiB)    2^60 bytes = 1024 PiB





















The JEDEC memory standards, however, define uppercase K, M, and G for the binary powers 2^10, 2^20, and 2^30, to reflect common usage.

















Size examples

  • 1 bit: the answer to a yes/no question
  • 1 byte: a number from 0 to 255
  • 90 bytes: enough to store a typical line of text from a book
  • 512 bytes = ½ KiB: the typical sector of a hard disk
  • 1024 bytes = 1 KiB: the classical block size in UNIX filesystems
  • 2048 bytes = 2 KiB: a CD-ROM sector
  • 4096 bytes = 4 KiB: a memory page in x86 (since the Intel 80386)
  • 4 kB: about one page of text from a novel
  • 120 kB: the text of a typical pocket book
  • 1 MB: a 1024×1024-pixel bitmap image with 256 colors (8 bpp color depth)
  • 3 MB: a three-minute song at a 128 kbit/s bitrate
  • 650–900 MB: a CD-ROM
  • 1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s
  • 15 GB: the storage Google offers for free (as of 2014)
  • 8/16 GB: the size of a typical flash drive
  • 4 TB: the size of a $300 hard disk (as of 2014)
  • 966 EB: one prediction of the volume of the whole Internet in 2015