Deni M

Reputation: 1

Identify unique string values among lists of elements

I have a large, unbalanced panel dataset in which each observation can take multiple string values in each year, each stored in a separate variable:

obs    year   var1    var2    var3    newval
1      1990   str1    str2    str3    3
1      1991   str1    str4    str5    2
2      1990   str3    str4            2
2      1991   str4    str5            1
2      1993   str3    str5            0
2      1994   str7                    1

For each observation and each year, I need to count how many of the string values are "new", meaning that they do not show up among the values taken by that observation in any previous year. For example, for obs 2 in 1991, str4 already appeared in 1990, so only str5 is new and newval is 1.

How should I approach this problem in Stata?

Thank you.

Upvotes: 0

Views: 1670

Answers (2)

Nick Cox

Reputation: 37208

This question was also posted on Statalist. Here's my answer. I tend not to go for merges unless the problem starts with two or more files.

clear
* the "" entries fill the empty cells explicitly so that input reads each row as one observation
input obs yr str4 var1 str4 var2 str4 var3
1 90 str1 str2 str3
1 91 str1 str4 str5
2 90 str3 str4 ""
2 91 str4 str5 ""
2 93 str3 str5 ""
2 94 str7 ""   ""
end
* stack var1-var3 so that values can be compared across years within obs
reshape long var, i(obs yr) j(which)
* flag the first chronological appearance of each value within each obs
bysort obs var (yr) : gen new = _n == 1 & !missing(var)
* sum the flags within each obs-year to get the count of new values
bysort obs yr : replace new = sum(new)
by obs yr : replace new = new[_N]
* back to the original wide layout
reshape wide var, i(obs yr) j(which)
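
As a quick check (my addition, not part of the answer above), listing the result lets you confirm that the new variable reproduces the newval column from the question:

list obs yr var1 var2 var3 new, sepby(obs)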

MORE: Further comments focused largely on efficiency, meaning here speed rather than space. (Storage space could be biting the poster.)

Without a restructure (done here with reshape), the problem is a triple loop: over identifiers, over observations for each identifier, and over variables. Possibly the two outer loops can be collapsed into one. But an explicit loop over observations is usually slow in Stata.
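
To make that concrete, here is a rough sketch of the explicit-loop approach (mine, not part of the answer), assuming the wide layout above with variables obs, yr and var1-var3; every row is visited in interpreted code, which is what makes it slow:

* rough sketch of the explicit-loop (no reshape) approach; slow because
* every row is visited in interpreted code
sort obs yr
gen new2 = 0
forvalues i = 1/`=_N' {
    foreach v of varlist var1-var3 {
        if `v'[`i'] == "" continue                  // skip empty cells
        local seen 0
        if `i' > 1 {
            forvalues j = 1/`=`i' - 1' {            // earlier rows only
                if obs[`j'] == obs[`i'] {           // same identifier
                    foreach w of varlist var1-var3 {
                        if `w'[`j'] == `v'[`i'] local seen 1
                    }
                }
            }
        }
        if !`seen' quietly replace new2 = new2 + 1 in `i'
    }
}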

With the restructuring solutions proposed by Dimitriy and myself, the by: operations go straight to compiled code and are relatively fast; reshape itself is interpreted code and entails file manipulation, so it can be slow. On the other hand, reshape is quick to write down once you have some experience, and it really is worth acquiring that fluency. In addition to the help for reshape and the manual entry, see the FAQ on reshape I wrote at http://www.stata.com/support/faqs/data-management/problems-with-reshape/

Another consideration is what else you want to do with this kind of dataset. If there are going to be other problems of similar character, they will usually be easier with a long structure as produced by reshape, so keeping that structure will be a good idea.
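
For instance (my example, not from the answer), while the data are still in the long structure, before the final reshape wide, a related question such as how many non-missing values each obs takes in each year is a one-liner:

bysort obs yr : egen nvals = total(!missing(var))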

Upvotes: 1

dimitriy

Reputation: 9460

There's probably a more elegant way to do this.

The main idea is to reshape the data first and number the occurrences of each string in chronological order; reshaping makes this much easier. Then I aggregate with collapse, counting only the first instance in which each string appears, and finally merge the result back onto your original data.

#delimit;

preserve;
    tempfile newval;

    reshape long var, i(obs year) j(s); // stack all the vars on top of each other
    bys obs var (year): gen n=_n if !missing(var); // number the appearance of each string in chronological order
    replace n=0 if n>1 & !missing(n); // only count the first instance

    collapse (sum) mynewval=n, by(obs year); // add up the counts
    save `newval';
restore;

merge 1:1 obs year using `newval', nogen;

compare newval mynewval;

Upvotes: 1
