SCIENCE

QUESTION

Individual project
1. Here are the salaries of 25 IT professionals in 2009 in Chicago:
$109,000   $95,000    $65,000   $65,000   $59,000   $180,000   $101,325
$130,000   $81,500    $69,000   $71,500   $74,880   $64,000    $72,000
$71,000    $82,300    $49,000   $51,200   $39,000   $48,500    $64,330
$41,100    $52,330    $82,000

a. Make a frequency distribution using five classes with the upper class limit of the first class as the
lower class limit of the second.
b. Make a histogram from your frequency distribution.
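
For reference, a minimal Python sketch of how five equal-width classes and a rough text histogram could be computed from the salary values listed above; a plotting library such as matplotlib could equally be used for part (b).

# Sketch for question 1: five equal-width classes over the listed salaries,
# where each class's upper limit is the next class's lower limit.
salaries = [109000, 95000, 65000, 65000, 59000, 180000, 101325, 130000,
            81500, 69000, 71500, 74880, 64000, 72000, 71000, 82300,
            49000, 51200, 39000, 48500, 64330, 41100, 52330, 82000]

k = 5
low, high = min(salaries), max(salaries)
width = (high - low) / k                      # class width

for i in range(k):
    lower = low + i * width
    upper = lower + width
    # count values in [lower, upper); the last class also includes the maximum
    count = sum(lower <= s < upper or (i == k - 1 and s == high) for s in salaries)
    print(f"${lower:>9,.0f} - ${upper:>9,.0f}: {count:2d} {'*' * count}")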

2. Find the prices of 10 different printers for your PC. Compute the mean, median and mode of
these prices.
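
For reference, a minimal sketch of these measures in Python; the printer prices below are placeholders, not researched values.

import statistics

# Hypothetical printer prices in dollars (placeholders only)
prices = [49.99, 59.99, 79.99, 89.99, 99.99, 99.99, 129.99, 149.99, 199.99, 249.99]

print("mean  :", statistics.mean(prices))
print("median:", statistics.median(prices))
print("mode  :", statistics.mode(prices))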

3. In the following table you can see the memory usage of a PC at a given moment.

a. Find the measures of central tendency and the measures of dispersion of the memory usage in KB.
b. Once you have computed the measures, make a scatter plot of those values.
c. Identify the values that are responsible for the variance of the dataset, and suggest how the
computer user could decrease the variance of his memory usage.

4. There is a mathematical theory called queueing theory that studies the ways in which computer jobs
are fed to CPUs and how the resulting queues can be kept to a minimum. Show how a
computer can estimate the average number of jobs waiting in a queue.

Suppose that in a 5-second interval jobs arrive as indicated in the following table (arrival time is
assumed to be at the beginning of each second). In the first second, jobs A and B arrive. During
the second second, B moves to the head of the line (job A is completed, as it took 1 sec to be
served), jobs C and D arrive, and so on.
Find:
a. The mean number of jobs in line
b.  The mode of the number of jobs in line
Time in seconds   Jobs
1                 A, B
2                 C, D
3                 (none)
4                 E, F
5                 (none)
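
One way a computer could estimate these quantities is to simulate the queue second by second. The sketch below assumes, as described above, that arrivals happen at the start of each second, that exactly one job is served per second, and that the job currently being served still counts as being in line.

from collections import deque
from statistics import mean, multimode

# Jobs arriving at the start of each second (from the table above)
arrivals = {1: ["A", "B"], 2: ["C", "D"], 3: [], 4: ["E", "F"], 5: []}

queue = deque()
jobs_in_line = []                      # observed queue length each second

for second in range(1, 6):
    queue.extend(arrivals[second])     # new jobs join the line
    jobs_in_line.append(len(queue))    # record the queue length this second
    if queue:
        queue.popleft()                # one job is served per second

print("queue length per second:", jobs_in_line)
print("mean number of jobs in line:", mean(jobs_in_line))
print("mode of the number of jobs in line:", multimode(jobs_in_line))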

5. Many times, we are required to use statistical measures to try to reconstruct a problem.
We run a program on 10 different inputs. The times are measured in 1-second intervals, and none of
them took 0 seconds.
a. Suppose the standard deviation of the set of running times is 0. What does this tell
you about the running times?
b. Suppose that the mean of the times is 1000.9 sec while the median is 1 sec. Explain what you
know about the program running times for all 10 different inputs.
c. Assume now that the mean of the times is 1000.9 sec while the median is 1 sec and the
variance is 9,998,000. Explain what you know about the program running times for all 10
different inputs.
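
One scenario consistent with the numbers in parts (b) and (c), nine runs of 1 second and a single run of 10,000 seconds, can be checked directly in Python; these particular times are an assumption chosen only to reproduce the stated mean, median and (sample) variance.

from statistics import mean, median, variance

# Assumed running times: nine fast runs and one very slow run (seconds)
times = [1] * 9 + [10000]

print("mean    :", mean(times))       # 1000.9
print("median  :", median(times))     # 1.0
print("variance:", variance(times))   # sample variance, about 9,998,000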
SOLUTION

Discussion board 1

Member X   Member Y   Member Z   Member Signal   President   V. President   Output
    1          0          0            1              1             1          1
    1          1          0            1              1             0          0
    1          0          1            1              0             1          0
    1          1          1            1              1             1          1
    0          0          0            0              0             0          0
    0          1          0            1              1             0          0
    0          0          1            1              0             1          0
    0          1          1            1              1             1          1

If (x==1) and (y==1) and (z==1) and (P==1) and (vp==1) then output=1
If (x==0) and (y==1) and (z==1) and (P==1) and (vp==1) then output=1
If (x==0) and (y==0) and (z==1) and (P==1) and (vp==1) then output=1
If (x==0) and (y==0) and (z==0) and (P==1) and (vp==1) then output=0

If (x==1) and (y==0) and (z==1) and (P==1) and (vp==1) then output=1
If (x==1) and (y==1) and (z==0) and (P==1) and (vp==1) then output=1
If (x==1) and (y==1) and (z==1) and (P==0) and (vp==1) then output=0
If (x==1) and (y==1) and (z==1) and (P==1) and (vp==0) then output=0
If (x==1) and (y==1) and (z==1) and (P==0) and (vp==0) then output=0
If (x==0) and (y==0) and (z==0) and (P==0) and (vp==0) then output=0
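
A minimal Python version of the rule captured by the table and the cases above (the output is 1 exactly when at least one of the three members signals 1 and both the President and the Vice President signal 1):

def output(x, y, z, p, vp):
    """Output is 1 when at least one member signals 1 AND both the
    President and the Vice President approve."""
    member_signal = (x == 1) or (y == 1) or (z == 1)
    return 1 if member_signal and p == 1 and vp == 1 else 0

# Reproduce the rows of the truth table above (X, Y, Z, President, V. President)
rows = [(1, 0, 0, 1, 1), (1, 1, 0, 1, 0), (1, 0, 1, 0, 1), (1, 1, 1, 1, 1),
        (0, 0, 0, 0, 0), (0, 1, 0, 1, 0), (0, 0, 1, 0, 1), (0, 1, 1, 1, 1)]
for x, y, z, p, vp in rows:
    print(x, y, z, p, vp, "->", output(x, y, z, p, vp))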

 

Discussion board 2

For example:

Color Hexadecimal RGB
aquamarine 7FFFD4 127,255,212

In hex the digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, where A=10, B=11, C=12, D=13, E=14 and F=15.

Hex  7FFFD4 ->      

7F  = 7 * 16 + 15 = 127

FF = 15 * 16 + 15 = 255

D4 = D * 16  + 4 = 13 * 16 + 4 = 212
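
The same hand calculation can be sketched in Python, where int(..., 16) performs the base-16 conversion of each pair of hex digits:

def hex_to_rgb(hex_color):
    """Convert a 6-digit hex colour such as '7FFFD4' to an (R, G, B) triple."""
    r = int(hex_color[0:2], 16)   # '7F' -> 7*16 + 15 = 127
    g = int(hex_color[2:4], 16)   # 'FF' -> 15*16 + 15 = 255
    b = int(hex_color[4:6], 16)   # 'D4' -> 13*16 + 4 = 212
    return r, g, b

print(hex_to_rgb("7FFFD4"))       # (127, 255, 212)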

 

Decimal System is based on 10.

Therefore 3481 = 3*10^3 + 4*10^2 + 8*10^1 + 1*10^0.

Binary System is based on 2.

Divide 3481 by 2: quotient 1740, remainder 1. Divide 1740 by 2, and keep dividing each quotient by 2 until it reaches 0. Reading the remainders from last to first gives the binary representation: 3481 = 110110011001 in binary.

 

 

In Decimal

The binary number 11011001001 expands as
1*2^10 + 1*2^9 + 0*2^8 + 1*2^7 + 1*2^6 + 0*2^5 + 0*2^4 + 1*2^3 + 0*2^2 + 0*2^1 + 1*2^0 = 1737

In Hex

Group the same bits into nibbles (adding a leading 0):

0110   1100   1001
  6      C      9

So 11011001001 in binary is 06C9 in hex.

 

 

 

Discussion board 3

Sampling

Sampling methods are classified as either probability or nonprobability. In probability samples, each member of the population has a known non-zero probability of being selected. Probability methods include random sampling, systematic sampling, and stratified sampling. In nonprobability sampling, members are selected from the population in some nonrandom manner. These include convenience sampling, judgment sampling, quota sampling, and snowball sampling. The advantage of probability sampling is that sampling error can be calculated. Sampling error is the degree to which a sample might differ from the population. When inferring to the population, results are reported plus or minus the sampling error. In nonprobability sampling, the degree to which the sample differs from the population remains unknown.

Random sampling is the purest form of probability sampling. Each member of the population has an equal and known chance of being selected. When there are very large populations, it is often difficult or impossible to identify every member of the population, so the pool of available subjects becomes biased.

Systematic sampling is often used instead of random sampling. It is also called an Nth name selection technique. After the required sample size has been calculated, every Nth record is selected from a list of population members. As long as the list does not contain any hidden order, this sampling method is as good as the random sampling method. Its only advantage over the random sampling technique is simplicity. Systematic sampling is frequently used to select a specified number of records from a computer file.
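
A minimal sketch of systematic (every Nth record) selection in Python; the list of records here is hypothetical:

import random

def systematic_sample(records, sample_size):
    """Pick every Nth record after a random start, where N = population size / sample size."""
    step = len(records) // sample_size
    start = random.randrange(step)            # random starting point within the first interval
    return records[start::step][:sample_size]

population = [f"record_{i:03d}" for i in range(1, 101)]   # hypothetical list of 100 records
print(systematic_sample(population, 10))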

Stratified sampling is a commonly used probability method that is superior to random sampling because it reduces sampling error. A stratum is a subset of the population that shares at least one common characteristic. Examples of stratums might be males and females, or managers and non-managers. The researcher first identifies the relevant stratums and their actual representation in the population. Random sampling is then used to select a sufficient number of subjects from each stratum. “Sufficient” refers to a sample size large enough for us to be reasonably confident that the stratum represents the population. Stratified sampling is often used when one or more of the stratums in the population have a low incidence relative to the other stratums.

Convenience sampling is used in exploratory research where the researcher is interested in getting an inexpensive approximation of the truth. As the name implies, the sample is selected because they are convenient. This nonprobability method is often used during preliminary research efforts to get a gross estimate of the results, without incurring the cost or time required to select a random sample.

Judgment sampling is a common nonprobability method. The researcher selects the sample based on judgment. This is usually an extension of convenience sampling. For example, a researcher may decide to draw the entire sample from one “representative” city, even though the population includes all cities. When using this method, the researcher must be confident that the chosen sample is truly representative of the entire population.

Quota sampling is the nonprobability equivalent of stratified sampling. Like stratified sampling, the researcher first identifies the stratums and their proportions as they are represented in the population. Then convenience or judgment sampling is used to select the required number of subjects from each stratum. This differs from stratified sampling, where the stratums are filled by random sampling.

Snowball sampling is a special nonprobability method used when the desired sample characteristic is rare. It may be extremely difficult or cost prohibitive to locate respondents in these situations. Snowball sampling relies on referrals from initial subjects to generate additional subjects. While this technique can dramatically lower search costs, it comes at the expense of introducing bias because the technique itself reduces the likelihood that the sample will represent a good cross section from the population.

Discussion board 4

Web analytics is the measurement, collection, analysis and reporting of internet data for purposes of understanding and optimizing web usage.[1] It is often done without the permission or knowledge of the user, in which case it becomes a breach of web browser security.

Web analytics is not just a tool for measuring web traffic but can be used as a tool for business research and market research, and to assess and improve the effectiveness of a web site. Web analytics applications can also help companies measure the results of traditional print advertising campaigns. It helps one to estimate how traffic to a website changes after the launch of a new advertising campaign. Web analytics provides information about the number of visitors to a website and the number of page views. It helps gauge traffic and popularity trends which is useful for market research.

There are two categories of web analytics: off-site and on-site web analytics.

Off-site web analytics refers to web measurement and analysis regardless of whether you own or maintain a website. It includes the measurement of a website’s potential audience (opportunity), share of voice (visibility), and buzz (comments) that is happening on the Internet as a whole.

On-site web analytics measure a visitor’s journey once on your website. This includes its drivers and conversions; for example, which landing pages encourage people to make a purchase. On-site web analytics measures the performance of your website in a commercial context. This data is typically compared against key performance indicators for performance, and used to improve a web site or marketing campaign’s audience response.

On-site web analytics technologies

Many different vendors provide on-site web analytics software and services. There are two main technological approaches to collecting the data. The first method, log file analysis, reads the logfiles in which the web server records all its transactions. The second method, page tagging, uses JavaScript or images on each page to notify a third-party server when a page is rendered by a web browser. Both collect data that can be processed to produce web traffic reports.

Web server logfile analysis

Web servers record some of their transactions in a logfile. It was soon realized that these logfiles could be read by a program to provide data on the popularity of the website. Thus arose web log analysis software.
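
As a rough illustration, the sketch below counts requests per URL from a web server logfile. It assumes the Common Log Format and a file named access.log; both are assumptions rather than details from the text.

from collections import Counter

# Count page hits per URL from lines such as:
# 127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326
hits = Counter()
with open("access.log") as logfile:
    for line in logfile:
        try:
            request = line.split('"')[1]        # the quoted request part
            method, url, _ = request.split()    # e.g. GET /index.html HTTP/1.1
            hits[url] += 1
        except (IndexError, ValueError):
            continue                            # skip malformed lines

for url, count in hits.most_common(10):
    print(f"{count:6d}  {url}")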

Page tagging

Concerns about the accuracy of logfile analysis in the presence of caching, and the desire to be able to perform web analytics as an outsourced service, led to the second data collection method, page tagging or ‘Web bugs‘.

Click analytics is a special type of web analytics that gives special attention to clicks.

Commonly, click analytics focuses on on-site analytics. An editor of a web site uses click analytics to determine the performance of his or her particular site, with regards to where the users of the site are clicking.

Also, click analytics may happen real-time or “unreal”-time, depending on the type of information sought. Typically, front-page editors on high-traffic news media sites will want to monitor their pages in real-time, to optimize the content. Editors, designers or other types of stakeholders may analyze clicks on a wider time frame to help them assess the performance of writers, design elements, advertisements, etc.

Data about clicks may be gathered in at least two ways. Ideally, a click is “logged” when it occurs, and this method requires some functionality that picks up relevant information when the event occurs. Alternatively, one may institute the assumption that a page view is a result of a click, and therefore log a simulated click that led to that page view.

Discussion Board 5

To effectively use a database, a continuing educator must first collect appropriate data on seminar/conference attendees. Basic data for the continuing engineering educator’s database should include: (a) contact date/time, (b) contact person, (c) contact person’s address (if different from attendee), (d) attendee prefix, (e) first name, middle initial, last name and suffix, (f) attendee title, (g) organization name, (h) mail stop/department, (i) city/state/zip, (j) telephone number, (k) fax number, (l) email address, and (m) name tag name. Additional data would include: (a) job function, (b) gender, (c) the names of respondents’ immediate supervisors, or name and title of person authorizing attendance, (d) key code information to monitor effectiveness of individual mailings and mailing list, (e) standard industrial classification (SIC) number (if a business firm), (f) company size in terms of number of employees, (g) secondary email address, and (h) source of the registration (brochure, catalog, space ad, personal letter, etc.).
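
One possible way to represent such an attendee record in code is sketched below; the field names are illustrative assumptions, not a prescribed schema, and only part of the list above is shown.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Attendee:
    # "basic" data (abridged from the list above)
    contact_datetime: str
    first_name: str
    middle_initial: str
    last_name: str
    title: str
    organization: str
    city_state_zip: str
    telephone: str
    email: str
    name_tag_name: str
    # "additional" data, recorded when available
    job_function: Optional[str] = None
    gender: Optional[str] = None
    supervisor: Optional[str] = None
    registration_source: Optional[str] = None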

 

A continuing educator will use the database information in a number of different ways. First of all, confirmation forms and/or invoices will be sent to course registrants. Next, name tags, rosters, name tents, and certificates will be printed from the database. Third, an educator will use the database for future communications with the attendee.

 

After a course is over, an educator may send a thank-you letter to the participant or to his/her superior. Later on, an educator will use the database to do a needs assessment. Questionnaires will be sent to past attendees to find out other continuing education needs. Telephone surveys will be conducted to identify appropriate topics for programs. A participant’s superior or boss will be questioned to find out the kinds of courses that should be developed and presented in the future.

 

Last, a continuing educator will use the database to promote future courses. Promotional literature or email may be sent to the past attendee or to his/her superior. Depending upon the targeted individual, appropriate appeals will be developed in promotional literature to encourage the recipient to participate in other upcoming continuing education courses.

 

While building a database, a continuing educator must also make sure that the database is properly maintained. First, an educator must keep the database unduplicated, i.e., an individual’s name should appear only one time in the database. To ensure a clean list, an educator should only add the names of participants who are not already on the master file. If the individual has already taken a course in the past, his/her name will already be in the database. When this individual takes another course, his/her record should be updated with the appropriate course and other information.
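
A small sketch of that deduplication rule, keyed on the attendee's email address (the choice of key is an assumption):

master_file = {}    # email -> attendee record (a dict of fields)

def register(record):
    """Add a new attendee, or update the existing record instead of duplicating it."""
    key = record["email"].lower()
    if key in master_file:
        # already in the database: update with the new course information
        master_file[key]["courses"].extend(record["courses"])
    else:
        master_file[key] = record

register({"email": "jane@example.com", "first_name": "Jane", "courses": ["Statistics I"]})
register({"email": "jane@example.com", "first_name": "Jane", "courses": ["Databases II"]})
print(master_file["jane@example.com"]["courses"])   # ['Statistics I', 'Databases II']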

 

 

 

Last, a continuing educator should maintain a file of recommended names in the database. Typically, an educator might ask attendees to recommend colleagues who might be interested in attending future courses. These recommended names would, in turn, receive a special letter plus the brochure invitation from the continuing engineering educator. In the letter, the educator would mention the name of the person who recommended that the recipient receive an invitation to attend an upcoming program. This type of personalization will help encourage reluctant individuals to attend.

 

In conclusion, an educator will find that databases will become increasingly important to the success of the continuing engineering education program. Back in the 1970s, educators found that mass marketing worked since attendees were usually content to accept seminars and conferences that met at least a few of their needs. Then the 1980s brought the computer revolution, and continuing educators began to segment markets so they could more accurately match educational needs and seminar/conference programs. In the 1990s, the trend continued toward niche marketing as educators grouped individuals having a commonality of interest into market niches and then developed customized programs around these shared needs. In the 2000s, continuing educators will have to continue these trends and place even more emphasis on “one-to-one marketing,” on building long-term relationships with the participant, and on developing customized seminars and conferences for the learner. Databases are the vehicle for helping the continuing engineering educator achieve this goal.
