
Making software : what really works, and why we believe it

Editors: Andrew Oram; Greg Wilson
Publisher: Farnham; Cambridge : O'Reilly, ©2011.
Edition/Format: Book : English : 1st ed.
Database: WorldCat
Summary:
No doubt, you've heard many claims about how some tool, technology, or practice improves software development. But which claims are verifiable? In this book, leading thinkers offer essays that uncover the truth and unmask myths commonly held among the software development community.

Details

Genre/Form: Comprehensive overview (Gesamtdarstellung)
Document Type: Book
All Authors / Contributors: Andrew Oram; Greg Wilson
ISBN: 9780596808327; 0596808321
OCLC Number: 648096823
Notes: Includes index.
Description: xv, 602 p. : ill. ; 24 cm.
Contents: pt. One General Principles of Searching for and Using Evidence --
1. The Quest for Convincing Evidence / Forrest Shull --
In the Beginning --
The State of Evidence Today --
Change We Can Believe In --
The Effect of Context --
Looking Toward the Future --
2. Credibility, or Why Should I Insist on Being Convinced? / Marian Petre --
How Evidence Turns Up in Software Engineering --
Credibility and Relevance --
Aggregating Evidence --
Types of Evidence and Their Strengths and Weaknesses --
Society, Culture, Software Engineering, and You --
Acknowledgments --
3. What We Can Learn From Systematic Reviews / Barbara Kitchenham --
An Overview of Systematic Reviews --
The Strengths and Weaknesses of Systematic Reviews --
Systematic Reviews in Software Engineering --
Conclusion --
4. Understanding Software Engineering Through Qualitative Methods / Andrew Ko --
What Are Qualitative Methods? --
Reading Qualitative Research --
Using Qualitative Methods in Practice --
Generalizing from Qualitative Results --
Qualitative Methods Are Systematic --
5. Learning Through Application: The Maturing of the QIP in the SEL / Victor R. Basili --
What Makes Software Engineering Uniquely Hard to Research --
A Realistic Approach to Empirical Research --
The NASA Software Engineering Laboratory: A Vibrant Testbed for Empirical Research --
The Quality Improvement Paradigm --
Conclusion --
6. Personality, Intelligence, and Expertise: Impacts on Software Development / Jo E. Hannay --
How to Recognize Good Programmers --
Individual or Environment --
Concluding Remarks --
7. Why Is It So Hard to Learn to Program? / Mark Guzdial --
Do Students Have Difficulty Learning to Program? --
What Do People Understand Naturally About Programming? --
Making the Tools Better by Shifting to Visual Programming --
Contextualizing for Motivation --
Conclusion: A Fledgling Field --
8. Beyond Lines of Code: Do We Need More Complexity Metrics? / Ahmed E. Hassan --
Surveying Software --
Measuring the Source Code --
A Sample Measurement --
Statistical Analysis --
Some Comments on the Statistical Methodology --
So Do We Need More Complexity Metrics? --
pt. Two Specific Topics in Software Engineering --
9. An Automated Fault Prediction System / Thomas J. Ostrand --
Fault Distribution --
Characteristics of Faulty Files --
Overview of the Prediction Model --
Replication and Variations of the Prediction Model --
Building a Tool --
The Warning Label --
10. Architecting: How Much and When? / Barry Boehm --
Does the Cost of Fixing Software Increase over the Project Life Cycle? --
How Much Architecting Is Enough? --
Using What We Can Learn from Cost-to-Fix Data About the Value of Architecting --
So How Much Architecting Is Enough? --
Does the Architecting Need to Be Done Up Front? --
Conclusions --
11. Conway's Corollary / Christian Bird --
Conway's Law --
Coordination, Congruence, and Productivity --
Organizational Complexity Within Microsoft --
Chapels in the Bazaar of Open Source Software --
Conclusions --
12. How Effective Is Test-Driven Development? / Forrest Shull --
The TDD Pill --
What Is It? --
Summary of Clinical TDD Trials --
The Effectiveness of TDD --
Enforcing Correct TDD Dosage in Trials --
Cautions and Side Effects --
Conclusions --
Acknowledgments --
13. Why Aren't More Women in Computer Science? / Wendy M. Williams --
Why So Few Women? --
Should We Care? --
Conclusion --
14. Two Comparisons of Programming Languages / Lutz Prechelt --
A Language Shoot-Out over a Peculiar Search Algorithm --
Plat_Forms: Web Development Technologies and Cultures --
So What? --
15. Quality Wars: Open Source Versus Proprietary Software / Diomidis Spinellis --
Past Skirmishes --
The Battlefield --
Into the Battle --
Outcome and Aftermath --
Acknowledgments and Disclosure of Interest --
16. Code Talkers / Robert DeLine --
A Day in the Life of a Programmer --
What Is All This Talk About? --
A Model for Thinking About Communication --
17. Pair Programming / Laurie Williams --
A History of Pair Programming --
Pair Programming in an Industrial Setting --
Pair Programming in an Educational Setting --
Distributed Pair Programming --
Challenges --
Lessons Learned --
Acknowledgments --
18. Modern Code Review / Jason Cohen --
Common Sense --
A Developer Does a Little Code Review --
Group Dynamics --
Conclusion --
19. A Communal Workshop or Doors That Close? / Jorge Aranda --
Doors That Close --
A Communal Workshop --
Work Patterns --
One More Thing... --
20. Identifying and Managing Dependencies in Global Software Development / Marcelo Cataldo --
Why Is Coordination a Challenge in GSD? --
Dependencies and Their Socio-Technical Duality --
From Research to Practice --
Future Directions --
21. How Effective Is Modularization? / Gail Murphy --
The Systems --
What Is a Change? --
What Is a Module? --
The Results --
Threats to Validity --
Summary --
22. The Evidence for Design Patterns / Walter Tichy --
Design Pattern Examples --
Why Might Design Patterns Work? --
The First Experiment: Testing Pattern Documentation --
The Second Experiment: Comparing Pattern Solutions to Simpler Ones --
The Third Experiment: Patterns in Team Communication --
Lessons Learned --
Conclusions --
Acknowledgments --
23. Evidence-Based Failure Prediction / Thomas Ball --
Introduction --
Code Coverage --
Code Churn --
Code Complexity --
Code Dependencies --
People and Organizational Measures --
Integrated Approach for Prediction of Failures --
Summary --
Acknowledgments --
24. The Art of Collecting Bug Reports / Thomas Zimmermann --
Good and Bad Bug Reports --
What Makes a Good Bug Report? --
Survey Results --
Evidence for an Information Mismatch --
Problems with Bug Reports --
The Value of Duplicate Bug Reports --
Not All Bug Reports Get Fixed --
Conclusions --
Acknowledgments --
25. Where Do Most Software Flaws Come From? / Dewayne Perry --
Studying Software Flaws --
Context of the Study --
Phase 1 Overall Survey --
Phase 2 Design/Code Fault Survey --
What Should You Believe About These Results? --
What Have We Learned? --
Acknowledgments --
26. Novice Professionals: Recent Graduates in a First Software Engineering Job / Beth Simon --
Study Methodology --
Software Development Task --
Strengths and Weaknesses of Novice Software Developers --
Reflections --
Misconceptions That Hinder Learning --
Reflecting on Pedagogy --
Implications for Change --
27. Mining Your Own Evidence / Andreas Zeller --
What Is There to Mine? --
Designing a Study --
A Mining Primer --
Where to Go from Here --
Acknowledgments --
28. Copy-Paste as a Principled Engineering Tool / Cory Kapser --
An Example of Code Cloning --
Detecting Clones in Software --
Investigating the Practice of Code Cloning --
Our Study --
Conclusions --
29. How Usable Are Your APIs? / Steven Clarke --
Why Is It Important to Study API Usability? --
First Attempts at Studying API Usability --
If At First You Don't Succeed... --
Adapting to Different Work Styles --
Conclusion --
30. What Does 10X Mean? Measuring Variations in Programmer Productivity / Steve McConnell --
Individual Productivity Variation in Software Development --
Issues in Measuring Productivity of Individual Programmers --
Team Productivity Variation in Software Development.
Responsibility: edited by Andy Oram and Greg Wilson.

Abstract:

In this book, leading thinkers such as Steve McConnell, Barry Boehm, and Barbara Kitchenham offer essays that uncover the truth and unmask myths commonly held among the software development community.



Linked Data

<http://www.worldcat.org/oclc/648096823>
    library:oclcnum "648096823" ;
    owl:sameAs <info:oclcnum/648096823> ;
    rdf:type schema:Book ;
    schema:about <http://id.worldcat.org/fast/872537> ;  # schema:name "Computer software--Development"@en
    schema:bookEdition "1st ed." ;
    schema:copyrightYear "2011" ;
    schema:datePublished "2011" ;
    schema:exampleOfWork <http://worldcat.org/entity/work/id/1074069875> ;
    schema:inLanguage "en" ;
    schema:name "Making software : what really works, and why we believe it"@en ;
    schema:numberOfPages "602" ;
    umbel:isLike <http://bnb.data.bl.uk/id/resource/GBB083664> .
