Reputation: 3168
I have dynamic text drawn into a custom UIImageView. Text can contain combinations of characters like :-) or ;-), which I'd like to replace with PNG images.
I apologize for the amount of code below.
The code that creates the CTRunDelegate follows:
CTRunDelegateCallbacks callbacks;
callbacks.version = kCTRunDelegateVersion1;
callbacks.dealloc = emoticonDeallocationCallback;
callbacks.getAscent = emoticonGetAscentCallback;
callbacks.getDescent = emoticonGetDescentCallback;
callbacks.getWidth = emoticonGetWidthCallback;
// Functions: emoticonDeallocationCallback, emoticonGetAscentCallback, emoticonGetDescentCallback, emoticonGetWidthCallback are properly defined callback functions
CTRunDelegateRef ctrun_delegate = CTRunDelegateCreate(&callbacks, self);
// self is passed as the void *refCon parameter to each callback
The code for creating the attributed string is:
NSMutableAttributedString* attString = [[NSMutableAttributedString alloc] initWithString:self.data attributes:attrs];
// self.data is the string containing the text
// attrs just sets the font type and color
I then added the CTRunDelegate to this string:
CFAttributedStringSetAttribute((CFMutableAttributedStringRef)attString, range, kCTRunDelegateAttributeName, ctrun_delegate);
// range covers a single emoticon occurrence in the text (e.g. location = 5, length = 2)
// ctrun_delegate is previously created delegate for certain type of emoticon
The callback functions are defined like this:
void emoticonDeallocationCallback(void *refCon)
{
    // dealloc code goes here
}

CGFloat emoticonGetAscentCallback(void *refCon)
{
    return 10.0;
}

CGFloat emoticonGetDescentCallback(void *refCon)
{
    return 4.0;
}

CGFloat emoticonGetWidthCallback(void *refCon)
{
    return 30.0;
}
Now all this works fine: the callback functions are called, and I can see that the width, ascent, and descent affect how the text before and after the detected "emoticon char combo" is drawn.
Now I'd like to draw an image in the spot where this "hole" is made, but I can't find any documentation explaining how to get pixel (or other) coordinates in each callback.
Can anyone tell me how to obtain these?
Thanks in advance!
P.S.
As far as I can see, the callbacks are invoked when CTFramesetterCreateWithAttributedString is called, so no drawing is going on yet. I couldn't find any example showing how to match an emoticon's location to a place in the drawn text. Can it be done?
Upvotes: 1
Views: 577
Reputation: 3168
I've found a solution!
To recap: the issue is drawing text with CoreText into a UIImageView, where parts of the text, aside from the obvious font and color formatting, need to be replaced with small images inserted where the replaced sub-text was (e.g. :-) becomes a smiley face).
Here's how:
1) Search the provided string for all supported emoticons (e.g. search for the :-) substring):
NSRange found = [self.rawtext rangeOfString:emoticonString options:NSCaseInsensitiveSearch range:searchRange];
If an occurrence is found, store it in a CFRange:
CFRange cf_found = CFRangeMake(found.location, found.length);
If you're searching for multiple different emoticons (e.g. :) :-) ;-) ;) etc.), sort all found occurrences in ascending order of their locations.
2) Replace each emoticon substring (e.g. :-)) that you want to render as an image with a single space. After this, you must also update the found locations to match these new spaces. It's not as complicated as it sounds; a rough sketch follows.
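Here's a rough sketch of steps 1 and 2 put together. It assumes the small FoundEmoticon container class used further below has a settable range property; the emoticon list, the hypothetical imageName property, and the sorting details are illustrative only:
NSMutableArray *foundEmoticons = [NSMutableArray array];

// Step 1: find every occurrence of every supported emoticon in the raw text.
for (NSString *emoticonString in @[@":-)", @":)", @";-)", @";)"]) {
    NSRange searchRange = NSMakeRange(0, self.rawtext.length);
    NSRange found;
    while (searchRange.length > 0 &&
           (found = [self.rawtext rangeOfString:emoticonString
                                        options:NSCaseInsensitiveSearch
                                          range:searchRange]).location != NSNotFound) {
        FoundEmoticon *emoticon = [[FoundEmoticon alloc] init];
        emoticon.range = found;               // range in the raw, unmodified text
        emoticon.imageName = emoticonString;  // hypothetical property mapping to a PNG
        [foundEmoticons addObject:emoticon];
        searchRange = NSMakeRange(NSMaxRange(found),
                                  self.rawtext.length - NSMaxRange(found));
    }
}

// Sort ascending by location so the offsets can be fixed up in a single pass.
[foundEmoticons sortUsingComparator:^NSComparisonResult(FoundEmoticon *a, FoundEmoticon *b) {
    if (a.range.location == b.range.location) return NSOrderedSame;
    return a.range.location < b.range.location ? NSOrderedAscending : NSOrderedDescending;
}];

// Step 2: replace each emoticon with a single space and shift the stored
// locations of everything that follows it.
NSMutableString *cleanText = [self.rawtext mutableCopy];
NSUInteger removedSoFar = 0;
for (FoundEmoticon *emoticon in foundEmoticons) {
    NSRange shifted = NSMakeRange(emoticon.range.location - removedSoFar, emoticon.range.length);
    NSUInteger removedHere = shifted.length - 1;
    [cleanText replaceCharactersInRange:shifted withString:@" "];
    emoticon.range = NSMakeRange(shifted.location, 1);  // now occupies one space
    removedSoFar += removedHere;
}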
3) Use CTRunDelegateCreate for each emoticon to attach callbacks to the newly created string (the one that contains [SPACE] instead of :-)).
4) The callback functions should return the correct emoticon width based on the image size you will use; a sketch of steps 3 and 4 follows.
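This is a minimal sketch of attaching one run delegate per emoticon. It assumes ARC bridge casts, a hypothetical emoticonImage property on FoundEmoticon to derive the metrics from, and attString being the mutable attributed string built from the cleaned-up text:
static void EmoticonDeallocCallback(void *refCon)
{
    // Nothing is retained by the delegate in this sketch.
}

static CGFloat EmoticonAscentCallback(void *refCon)
{
    FoundEmoticon *emoticon = (__bridge FoundEmoticon *)refCon;
    return emoticon.emoticonImage.size.height * 0.75;  // tune to your font metrics
}

static CGFloat EmoticonDescentCallback(void *refCon)
{
    FoundEmoticon *emoticon = (__bridge FoundEmoticon *)refCon;
    return emoticon.emoticonImage.size.height * 0.25;
}

static CGFloat EmoticonWidthCallback(void *refCon)
{
    FoundEmoticon *emoticon = (__bridge FoundEmoticon *)refCon;
    return emoticon.emoticonImage.size.width;
}

// Attach one run delegate per emoticon, over the single space that replaced it.
for (FoundEmoticon *emoticon in foundEmoticons) {
    CTRunDelegateCallbacks callbacks = {
        .version    = kCTRunDelegateVersion1,
        .dealloc    = EmoticonDeallocCallback,
        .getAscent  = EmoticonAscentCallback,
        .getDescent = EmoticonDescentCallback,
        .getWidth   = EmoticonWidthCallback
    };
    CTRunDelegateRef delegate = CTRunDelegateCreate(&callbacks, (__bridge void *)emoticon);
    CFAttributedStringSetAttribute((__bridge CFMutableAttributedStringRef)attString,
                                   CFRangeMake(emoticon.range.location, emoticon.range.length),
                                   kCTRunDelegateAttributeName,
                                   delegate);
    CFRelease(delegate);
}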
5) As soon as you execute CTFramesetterCreateWithAttributedString, these callbacks are executed as well, giving the framesetter the data it will later use when creating glyphs for drawing in the given frame path.
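For reference, the framesetter and frame creation itself might look like this (a sketch assuming the frame simply fills the view's bounds):
CTFramesetterRef framesetter =
    CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)attString);

CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, NULL, self.bounds);

CTFrameRef frame = CTFramesetterCreateFrame(framesetter,
                                            CFRangeMake(0, attString.length),
                                            path,
                                            NULL);
CGPathRelease(path);
CFRelease(framesetter);  // the frame itself is released after drawing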
6) Now comes the part I missed: once you create the frame for the framesetter using CTFramesetterCreateFrame, cycle through all found emoticons and do the following:
Get the number of lines from the frame and the origin of each line:
CFArrayRef lines = CTFrameGetLines(frame);
int linenum = CFArrayGetCount(lines);
CGPoint origins[linenum];
CTFrameGetLineOrigins(frame, CFRangeMake(0, linenum), origins);
For each emoticon, cycle through all lines looking for the glyph run that contains it (based on each emoticon's range.location and the number of glyphs in each run):
(Inspiration came from here: CTRunGetImageBounds returning inaccurate results)
int eloc = emoticon.range.location; // emoticon's location in text
for( int i = 0; i < linenum; i++ )
{
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    CFArrayRef gruns = CTLineGetGlyphRuns(line);
    int grunnum = CFArrayGetCount(gruns);
    for( int j = 0; j < grunnum; j++ )
    {
        CTRunRef grun = (CTRunRef)CFArrayGetValueAtIndex(gruns, j);
        int glyphnum = CTRunGetGlyphCount(grun);
        if( eloc > glyphnum )
        {
            eloc -= glyphnum;
        }
        else
        {
            CFRange runRange = CTRunGetStringRange(grun);
            CGRect runBounds;
            CGFloat ascent, descent;
            runBounds.size.width = CTRunGetTypographicBounds(grun, CFRangeMake(0, 0), &ascent, &descent, NULL);
            runBounds.size.height = ascent + descent;
            CGFloat xOffset = CTLineGetOffsetForStringIndex(line, runRange.location, NULL);
            runBounds.origin.x = origins[i].x + xOffset;
            runBounds.origin.y = origins[i].y;
            runBounds.origin.y -= descent;
            emoticon.location = CGPointMake(runBounds.origin.x + runBounds.size.width, runBounds.origin.y);
            emoticon.size = CGPointMake([emoticon EmoticonWidth], runBounds.size.height);
            break;
        }
    }
}
Please do not treat this code as copy-paste-ready; I had to strip out a lot of other code, so it is only meant to explain what I did, not to be used as-is.
7) Finally, I can create the context and draw both the text and the emoticons at the correct places:
if( currentContext )
{
    CGContextSaveGState(currentContext);
    {
        CGContextSetTextMatrix(currentContext, CGAffineTransformIdentity);
        CTFrameDraw(frame, currentContext);
    }
    CGContextRestoreGState(currentContext);

    if( foundEmoticons != nil )
    {
        for( FoundEmoticon *emoticon in foundEmoticons )
        {
            [emoticon DrawInContext:currentContext];
        }
    }
}
And the method that draws the emoticon (I just made it draw its border and pivot point):
-(void) DrawInContext:(CGContextRef)currentContext
{
    // Random stroke color so each emoticon placeholder stands out.
    CGFloat R = round(10.0 * [self randomFloat]) * 0.1;
    CGFloat G = round(10.0 * [self randomFloat]) * 0.1;
    CGFloat B = round(10.0 * [self randomFloat]) * 0.1;
    CGContextSetRGBStrokeColor(currentContext, R, G, B, 1.0);

    // Cross at the pivot point (the emoticon's anchor location).
    CGFloat pivotSize = 8.0;
    CGContextBeginPath(currentContext);
    CGContextMoveToPoint(currentContext, self.location.x, self.location.y - pivotSize);
    CGContextAddLineToPoint(currentContext, self.location.x, self.location.y + pivotSize);
    CGContextMoveToPoint(currentContext, self.location.x - pivotSize, self.location.y);
    CGContextAddLineToPoint(currentContext, self.location.x + pivotSize, self.location.y);
    CGContextDrawPath(currentContext, kCGPathStroke);

    // Rectangle outlining the emoticon's bounds.
    CGContextBeginPath(currentContext);
    CGContextMoveToPoint(currentContext, self.location.x, self.location.y);
    CGContextAddLineToPoint(currentContext, self.location.x + self.size.x, self.location.y);
    CGContextAddLineToPoint(currentContext, self.location.x + self.size.x, self.location.y + self.size.y);
    CGContextAddLineToPoint(currentContext, self.location.x, self.location.y + self.size.y);
    CGContextAddLineToPoint(currentContext, self.location.x, self.location.y);
    CGContextDrawPath(currentContext, kCGPathStroke);
}
Resulting image: http://i57.tinypic.com/rigis5.png
:-)))
P.S.
Here is result image with multiple lines: http://i61.tinypic.com/2pyce83.png
P.P.S.
Here is result image with multiple lines and with PNG image for emoticon: http://i61.tinypic.com/23ixr1y.png
Upvotes: 3
Reputation: 33359
Are you drawing the text in a UITextView object? If so, you can ask its layout manager where the emoticon is drawn, specifically via the -[NSLayoutManager boundingRectForGlyphRange:inTextContainer:] method (also grab the text container of the text view).
Note that it expects a glyph range, not a character range. Multiple characters can make up a single glyph, so you will need to convert between them. Again, NSLayoutManager has methods to convert between character ranges and glyph ranges.
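A minimal sketch of that query, assuming textView is your UITextView and emoticonCharRange is the character range you found for the emoticon:
NSLayoutManager *layoutManager = textView.layoutManager;
NSTextContainer *textContainer = textView.textContainer;

// Convert the character range to a glyph range first.
NSRange glyphRange = [layoutManager glyphRangeForCharacterRange:emoticonCharRange
                                           actualCharacterRange:NULL];
CGRect rect = [layoutManager boundingRectForGlyphRange:glyphRange
                                       inTextContainer:textContainer];

// rect is in text-container coordinates; offset by the text view's insets
// to get coordinates in the text view itself.
rect.origin.x += textView.textContainerInset.left;
rect.origin.y += textView.textContainerInset.top;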
Alternatively, if you're not drawing inside a text view, you should create your own layout manager and text container, so you can do the same.
A text container describes a region on the screen where text will be drawn; typically it's a rectangle, but it can be any shape. A layout manager figures out how to fit the text within whatever shape the text container describes.
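If you go that route, a bare-bones TextKit stack (with no UITextView involved) might be set up like this, assuming attString holds your attributed text and you are laying it out into your view's bounds:
NSTextStorage *textStorage = [[NSTextStorage alloc] initWithAttributedString:attString];
NSLayoutManager *layoutManager = [[NSLayoutManager alloc] init];
NSTextContainer *textContainer = [[NSTextContainer alloc] initWithSize:self.bounds.size];

[layoutManager addTextContainer:textContainer];
[textStorage addLayoutManager:layoutManager];

// The same glyph-range queries as above now work against this layout manager,
// and the text itself can be drawn with
// [layoutManager drawGlyphsForGlyphRange:... atPoint:...].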
Which brings me to the other approach you could take: modify the text container object so that there is a blank region where no text can be rendered, and put a UIImageView inside that blank region. Use the layout manager to figure out where the blank regions should be.
Under iOS 7 and later, you can do this by adding "exclusion paths" to the text container, which is just an array of paths (probably rectangles) marking where each image goes. For earlier versions of iOS you need to subclass NSTextContainer and override lineFragmentRectForProposedRect:atIndex:writingDirection:remainingRect:.
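A sketch of the iOS 7+ exclusion-path route, assuming imageFrame is the image view's frame in the text view's coordinate system and "smiley" is a placeholder asset name:
// Exclusion paths are specified in text-container coordinates, so subtract
// the text view's insets from the frame first.
CGRect hole = CGRectOffset(imageFrame,
                           -textView.textContainerInset.left,
                           -textView.textContainerInset.top);
textView.textContainer.exclusionPaths = @[[UIBezierPath bezierPathWithRect:hole]];

UIImageView *emoticonView = [[UIImageView alloc] initWithFrame:imageFrame];
emoticonView.image = [UIImage imageNamed:@"smiley"];
[textView addSubview:emoticonView];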
Upvotes: 1