Friday, February 06, 2009

Adding connection pool mechanism to OmniThreadLibrary

I have a problem.

I have this thread pool, which needs to be enhanced a little. And I don’t know how to do it.

Well, actually I know. I have at least three approaches. I just can’t tell which one is the best :(

I’m talking about the OmniThreadLibrary – I know you guessed that already. The problem is that the thread pool in OTL doesn’t allow for per-thread resource initialization, and that’s something you need when you’re implementing a connection pool (more background info here). I started adding this functionality but soon found out that I didn’t have a good idea of how to implement it. Oh, I have a few ideas; they are just not very good :(

Current

At the moment, the OTL thread pool functionality is exposed through an interface. This interface is pretty high-level and doesn’t allow the programmer to mess with the underlying thread management. All thread information is hidden in the implementation section. There’s a notification event that’s triggered when a pool thread is created or destroyed, but it is only a notification, and it is triggered asynchronously (and possibly with a delay).

In short:

type
  TOTPWorkerThread = class(TThread)
  end;

  TOmniThreadPool = class(TInterfacedObject, IOmniThreadPool)
  end;

Event handlers

The first idea was to add OnThreadInitialization/OnThreadCleanup events to the IOmniThreadPool. [Actually, something similar already exists – OnWorkerThreadCreated_Asy and OnWorkerThreadDestroyed_Asy – but those events are part of the previous implementation and will be removed very soon.] Those two events would receive a TThread parameter, and the handlers would do the proper initialization there.

There are some big problems, though. Let’s say you’re implementing a database connection pool. You’d have to open a database connection in OnThreadInitialization. Where would you store that info then? In an external structure, indexed by the TThread? Ugly! Even worse – how would you access the database info from the task that will be executing in the thread pool? By accessing that same structure? Eugh!
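
To make the ugliness concrete, a handler pair would look roughly like this (the event signature, CreateDBConnection and the FConnectionMap lookup structure are my assumptions, not existing OTL API):

procedure TDBConnectionPool.HandleThreadInitialization(Sender: TObject; thread: TThread);
begin
  // open the per-thread connection ...
  // ... and stash it in an external structure, keyed by the TThread
  FConnectionMap.Add(thread, CreateDBConnection);
end;

procedure TDBConnectionPool.HandleThreadCleanup(Sender: TObject; thread: TThread);
begin
  FConnectionMap.Remove(thread); // close and forget the connection
end;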

Rejected.

Thread subclassing

A better idea is to implement a subclassed thread class in your own code and then tell the thread pool to use this class when creating new worker threads. You’d then manage the database connection in overridden Initialize/Cleanup methods.

type
  TDBConnectionPoolThread = class(TOTPWorkerThread)
  strict private
    FDBConnection: TDBConnectionInfo;
  protected
    function Initialize: boolean; override;
    procedure Cleanup; override;
  end;

GlobalOmniThreadPool.ThreadClass := TDBConnectionPoolThread;

Looks much better, but there’s again a problem – I’d have to expose the TOTPWorkerThread class in the interface section, and that’s just plain ugly. The worker thread mechanism should stay hidden. Only a few people would ever be interested in it.

Thread data subclassing

An even better idea is to add an empty

TOTPWorkerThreadData = class
end;

definition to the interface section of the thread pool unit. IOmniThreadPool would get a ThreadDataClass property pointing to this class, and each worker thread would create/destroy an instance of it in its Execute method.

You’d add database management as

type
  TDBConnectionPoolThreadData = class(TOTPWorkerThreadData)
  strict private
    FDBConnection: TDBConnectionInfo;
  public
    constructor Create;
    destructor Destroy; override;
  end;

GlobalOmniThreadPool.ThreadDataClass := TDBConnectionPoolThreadData;

[Maybe the constructor has to be virtual here? I never know until I try.]
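
For what it’s worth, it would: when an object is constructed through a class reference, a descendant’s constructor only runs if the constructor is virtual. A sketch of the plumbing (the metaclass name is my guess):

type
  TOTPWorkerThreadDataClass = class of TOTPWorkerThreadData;

  TOTPWorkerThreadData = class
  public
    constructor Create; virtual; // virtual, so ThreadDataClass.Create dispatches to the override
  end;

// inside the worker thread's Execute:
//   FThreadData := ThreadDataClass.Create;
// TDBConnectionPoolThreadData would then declare its constructor with 'override'.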

There’s still the question of accessing this information from the task (and the same goes for the previous attempt – I just skipped the issue there). I’d have to extend the IOmniTask interface with a method to access per-thread data.

Thread data with interfaces

While writing this mind dump, a new idea crossed my mind – what if thread data were implemented as an interface, without the need to subclass the thread or the thread data? In a way, it is the first idea, reimplemented to remove all its problems.

The task interface would be extended with thread data access definitions, approximately like this:

type
  IOtlThreadData = interface
  end;

  IOtlTask = interface
    function GetThreadData: IOtlThreadData;
    property ThreadData: IOtlThreadData read GetThreadData;
  end;

The thread pool would get a property containing a factory method.

type
  TCreateThreadDataProc = function: IOtlThreadData;

  IOmniThreadPool = interface
    function GetThreadDataFactory: TCreateThreadDataProc;
    procedure SetThreadDataFactory(const value: TCreateThreadDataProc);
    property ThreadDataFactory: TCreateThreadDataProc
      read GetThreadDataFactory write SetThreadDataFactory;
  end;

This factory method would be called when a thread is created, to initialize the thread data. Each task would get that same interface assigned to its ThreadData property just before it starts executing in the selected thread. The task would then read the ThreadData property to retrieve this information.

In the database connection pool scenario, you’d have to write a connection interface, object and factory.

type
  IDBConnectionPoolThreadData = interface(IOtlThreadData)
    function GetConnectionInfo: TDBConnectionInfo;
    property ConnectionInfo: TDBConnectionInfo read GetConnectionInfo;
  end;

  TDBConnectionPoolThreadData = class(TInterfacedObject, IDBConnectionPoolThreadData)
  strict private
    FDBConnection: TDBConnectionInfo;
  protected
    function GetConnectionInfo: TDBConnectionInfo;
  public
    constructor Create;
    destructor Destroy; override;
  end;

function CreateConnectionPoolThreadData: IDBConnectionPoolThreadData;
begin
  Result := TDBConnectionPoolThreadData.Create;
end;

GlobalThreadPool.ThreadDataFactory := CreateConnectionPoolThreadData;
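
On the consuming side, a task would then pull the connection out of its ThreadData property, roughly like this (ProcessQuery and UseConnection are made-up names, sketched for illustration only):

procedure ProcessQuery(const task: IOtlTask);
var
  connData: IDBConnectionPoolThreadData;
begin
  // the pool assigned the thread data just before the task started executing
  connData := task.ThreadData as IDBConnectionPoolThreadData;
  UseConnection(connData.ConnectionInfo);
end;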

This approach requires slightly more work from the programmer, but I like it the most, as it somehow seems the cleanest of them all (plus, it is implemented with interfaces, which is pretty much the approach used throughout the OTL code).

So, dear reader, what do you think? If you have a better idea, or see a big problem with any of those implementations that I didn’t think of, please do tell in the comments!


Saturday, December 20, 2008

Christmas mystery

We were tracking down quite an interesting problem in the past few days …

We have this service that accepts connections from multiple computers and reports some status back. And it does that via SOAP. And the SOAP implementation is our own.

So far so good – we are using this approach in quite a few programs deployed in many places. Everything is working well.

Everything, except one installation.

That customer was reporting weird things. Mysterious things. Sometimes the client on one computer would, for a moment, display the same data as another computer. And then the display would flip back to the correct data. Mysterious.

As expected, the guy who wrote the service pointed his finger at the guy who wrote the SOAP layer (me). And, as expected, I did the reverse. We were each pretty sure that the problem was caused by buggy code written by the other guy. We were both wrong.

Of course, we first spent about a working day logging various parts of the service and the SOAP server. You know the drill – add logging, compile, connect to the VPN, upload the new version to the customer, start two clients on two machines at the remote site via RDP, wait for the problem, download logs, analyze. Booooring. At the end, we were none the wiser. We only knew that the server always returned correct data to all clients.

Then we switched our attention to the client. Who’d have guessed – the client always received the correct data. Except that it sometimes displayed wrong data. But that data was never received via the SOAP layer. As I said before – a mystery.

Of course, once we got to that point, we knew that the problem lay inside the client software, so we only had to dig in that direction. And, of course, we got the answer. And boy, was it a surprising one!

You see, our client is somewhat baroque. Layers upon layers of code, developed over the years. It’s the fate of all successful applications, and this one is quite successful, at least in the vertical market we are working in. It is therefore not very surprising that the client uses temporary files in a somewhat strange arrangement. Instead of creating temporary files with unique names, it creates a temporary folder inside %temp% and stores its files there. This folder is guaranteed to be unique on the system, as it uses a global counter as part of the folder name. IOW, the first client on a computer stores data in %temp%\data_1, the second in %temp%\data_2. So the client on the first computer stored remote data (returned from the service) in %temp%\data_1\list.txt and the client on the second computer used %temp%\data_1\list.txt. They were both using the ‘data_1’ subfolder, as each was the first (and only) client instance on its machine. A recipe for disaster? Not really, as the %temp% folder is local to the computer.

Except that it was not.

Somebody at the customer’s site got the brilliant idea of configuring the temp folders of all domain clients to point to the domain controller. I don’t know who, and I surely don’t know why, but as a result %temp% pointed to the same folder (on the domain controller) on all client computers in the domain. So the first client downloaded its data to %temp% (on the domain controller), the second client downloaded its data to the same location, and then the first client displayed that data – data that had already been modified and belonged to the second client. A typical race condition if I ever saw one.

The solution is, of course, to always use GetTempFileName. But this time we surely (re)learned it in an interesting way.
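
For completeness, a minimal sketch of the fix (the wrapper name is mine; GetTempPath and GetTempFileName are the standard Win32 calls):

uses
  Windows, SysUtils;

function CreateUniqueTempFile(const prefix: string): string;
var
  tempPath: array [0..MAX_PATH] of Char;
  tempFile: array [0..MAX_PATH] of Char;
begin
  if GetTempPath(MAX_PATH, tempPath) = 0 then
    RaiseLastOSError;
  // uUnique = 0 makes Windows generate a unique name AND create the file,
  // which closes the race window between name generation and file creation
  if GetTempFileName(tempPath, PChar(prefix), 0, tempFile) = 0 then
    RaiseLastOSError;
  Result := tempFile;
end;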


Saturday, September 06, 2008

Interfacing with external programs

Say you have a program. A popular program (at least in some circles) that other people want to write add-ons for. Your program does some job, and in some processing steps you want to be able to execute external code and then proceed according to the result returned by that code. Similar to the way CGI code is executed when an HTTP server processes an HTTP request.

The other day I was trying to enumerate all possible ways to do that. I came up with the following solutions:

  • Running an external program. Data can be passed to the program’s standard input and read from its standard output – just as when a CGI program is launched from the HTTP server (see the sketch after this list). Simple, stable (it is simple to protect against malfunctioning add-ons), but quite slow.
  • Third-party DLL. Relatively simple, very fast, but can seriously destabilize the whole product. Upgrading the DLL is complicated (the main application must be shut down to upgrade the add-on).
  • [D]COM[+]. Not my bag of Swedish … sorry, wrong movie. Definitely not the way I’d like to pursue. Unstable. Leads to problems that nobody seems to be able to troubleshoot.
  • Windows messages. Messy. Plus, the main program runs as a service, while the add-on might not.
  • TCP. Implement the add-on as a text-processing TCP/IP service (another HTTP server, if we continue the CGI analogy). Interesting idea, but not very simple to implement. Fast when both are running on the same machine. Flexible – each can be shut down and upgraded independently; processing can be distributed over several computers. Complicated to configure when multiple add-ons are installed (each must be configured to a different port). Firewall and antivirus software may cause problems.
  • Drop folder. The main app drops a file into some folder and waits for the result file. Clumsy, possibly faster than the external program solution (the add-on can always be running), simple to implement and very stable.
  • Message queues (as in MSMQ). Interesting, but possibly too complicated for most customers to install and manage.
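
Since the first option is the one I keep coming back to, here is a rough Win32 sketch of it – error handling kept to a minimum, RunAddOn is a made-up name, and the add-on is assumed to be a console program:

uses
  Windows, SysUtils;

// Runs an external add-on, feeds it `input` on stdin, returns its stdout.
function RunAddOn(const exePath: string; const input: AnsiString): AnsiString;
var
  sa: TSecurityAttributes;
  inRead, inWrite, outRead, outWrite: THandle;
  si: TStartupInfo;
  pi: TProcessInformation;
  cmdLine: string;
  written, bytesRead: DWORD;
  buffer: array [0..4095] of AnsiChar;
  chunk: AnsiString;
begin
  Result := '';
  sa.nLength := SizeOf(sa);
  sa.lpSecurityDescriptor := nil;
  sa.bInheritHandle := True; // the child must inherit the pipe ends
  Win32Check(CreatePipe(inRead, inWrite, @sa, 0));
  Win32Check(CreatePipe(outRead, outWrite, @sa, 0));
  // the parent's ends must NOT be inherited, or the child would hold
  // extra handle copies and EOF would never be signalled
  Win32Check(SetHandleInformation(inWrite, HANDLE_FLAG_INHERIT, 0));
  Win32Check(SetHandleInformation(outRead, HANDLE_FLAG_INHERIT, 0));
  FillChar(si, SizeOf(si), 0);
  si.cb := SizeOf(si);
  si.dwFlags := STARTF_USESTDHANDLES;
  si.hStdInput := inRead;
  si.hStdOutput := outWrite;
  si.hStdError := outWrite;
  cmdLine := exePath;
  UniqueString(cmdLine); // CreateProcess may modify the command line buffer
  Win32Check(CreateProcess(nil, PChar(cmdLine), nil, nil, True, 0, nil, nil,
    si, pi));
  // close the ends that now belong to the child
  CloseHandle(inRead);
  CloseHandle(outWrite);
  WriteFile(inWrite, PAnsiChar(input)^, Length(input), written, nil);
  CloseHandle(inWrite); // EOF on the add-on's stdin
  while ReadFile(outRead, buffer, SizeOf(buffer), bytesRead, nil) and
        (bytesRead > 0) do
  begin
    SetString(chunk, buffer, bytesRead);
    Result := Result + chunk;
  end;
  WaitForSingleObject(pi.hProcess, INFINITE);
  CloseHandle(outRead);
  CloseHandle(pi.hThread);
  CloseHandle(pi.hProcess);
end;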

And now to the main point of my ramblings. What did I miss? Are there more possibilities? If you have any idea how to approach my problem from a different direction, leave me a comment. Don’t mention SOAP, BTW – it is implicitly included in the “add-on as a TCP server” solution.

And thanks!


Friday, March 21, 2008

Walking the key-value container

The recent discussion in the comments to my latest articles (Fun with enumerators - Walking the (integer) list, On bad examples and smelly code) caused a shift in my perspective. I had always treated my TGpIntegerObjectList and TGpInt64ObjectList as lists with some baggage attached, but in practice I almost never use them that way. Most of the time I treat them as background storage for key-value pairs.

What's the difference? Most of the time, I don't care about item indices. When I use my lists as containers, I never need to know where in the list some (Key, Value) pair is stored. OK, almost never. When I'm deleting from the list, I sometimes use the index, purely for performance reasons. And when I access the Value part, I have to find the index with IndexOf and then use it to reference the Objects property. There are probably other cases too - but that's something you just have to take into account if you want to use a list as a storage medium.

From time to time I extend the lists in my GpLists unit with small wrappers that help me avoid list indices in some specific situation. Today, I used the new Walk enumerator in some code and asked myself: "Why does it have to return a list index? Why couldn't it return a (Key, Value) pair?"

Good question. Why couldn't it? It turns out that it could.

A little enumerator that could

Let's set up some assumptions first. Assume that I have this pointless list.

il := TGpIntegerObjectList.Create;
il.AddObject(1, TGpString.Create('one'));
il.AddObject(2, TGpString.Create('two'));
il.AddObject(3, TGpString.Create('three'));

Further assume that I want to walk over it and display both the number and the text for each item. I can do this with a standard loop:

for idx := 0 to il.Count - 1 do
  Log('%d: %s', [il[idx], TGpString(il.Objects[idx]).Value]);

Or with my index-returning walker:

for idx in il.Walk do
  Log('%d: %s', [il[idx], TGpString(il.Objects[idx]).Value]);

But what I'd really like to do is:

for kv in il.WalkKV do
  Log('%d: %s', [kv.Key, TGpString(kv.Value).Value]);

Or even better:

for kv in il.WalkKV do
  Log('%d: %s', [kv.Key, kv.StrValue]);

In other words, I want the enumerator to return not a single item but a pair of items. Of course, a Delphi expression can only produce a single value, not a tuple, so we have to provide a wrapper for the enumerated (Key, Value) pairs. We have to pack them into a record, class or interface.


Getting all self-destructive


My first idea was to return an interface to a key-value object from the enumerator.

  IGpKeyValue = interface
    function GetKey: int64;
    function GetValue: TObject;
    property Key: int64 read GetKey;
    property Value: TObject read GetValue;
  end; { IGpKeyValue }

  TGpKeyValue = class(TInterfacedObject, IGpKeyValue)
  private
    kvKey: int64;
    kvValue: TObject;
  protected
    function GetKey: int64;
    function GetValue: TObject;
  public
    constructor Create(key: int64; value: TObject);
    property Key: int64 read GetKey;
    property Value: TObject read GetValue;
  end; { TGpKeyValue }

  TGpIntegerObjectListWalkKVEnumerator = class
    //...
    function GetCurrent: IGpKeyValue;
    property Current: IGpKeyValue read GetCurrent;
  end; { TGpIntegerObjectListWalkKVEnumerator }

function TGpIntegerObjectListWalkKVEnumerator.GetCurrent: IGpKeyValue;
var
  idx: integer;
begin
  idx := wkeListEnumerator.GetCurrent;
  Result := TGpKeyValue.Create(wkeListEnumerator.List[idx],
    TGpIntegerObjectList(wkeListEnumerator.List).Objects[idx]);
end; { TGpIntegerObjectListWalkKVEnumerator.GetCurrent }

That surely works fine, but guess what - it's incredibly slow. I wouldn't expect anything else - after all, an object is allocated for every enumerated value, plus all the complications with interface reference counting...


I did some testing, of course. A thousand iterations over a list with 10,000 elements. The results are quite interesting: 5 milliseconds for a standard for loop, 50 milliseconds for my Walk enumerator, and 5 seconds for the interface-based WalkKV. Ouch! Let's return to the drawing board...


One allocation is enough


My next idea was to return not an interface but an object. When you return an object, you actually return a pointer to the object data, which is quite fast. It wouldn't help much if I recreated this object every time GetCurrent is called, but luckily there is no real reason to do that. I can create the object when the enumerator is created and destroy it when the enumerator is destroyed.

  TGpKeyValue = class
  private
    kvKey  : int64;
    kvValue: TObject;
  public
    property Key: int64 read kvKey write kvKey;
    property Value: TObject read kvValue write kvValue;
  end; { TGpKeyValue }

  TGpIntegerObjectListWalkKVEnumerator = class
  private
    wkeCurrentKV: TGpKeyValue;
    wkeListEnumerator: TGpIntegerListWalkEnumerator;
  public
    constructor Create(aList: TGpIntegerObjectList; idxFrom, idxTo: integer);
    destructor Destroy; override;
    //...
    function GetCurrent: TGpKeyValue;
    property Current: TGpKeyValue read GetCurrent;
  end; { TGpIntegerObjectListWalkKVEnumerator }

constructor TGpIntegerObjectListWalkKVEnumerator.Create(aList: TGpIntegerObjectList;
  idxFrom, idxTo: integer);
begin
  inherited Create;
  wkeListEnumerator := TGpIntegerListWalkEnumerator.Create(aList, idxFrom, idxTo);
  wkeCurrentKV := TGpKeyValue.Create;
end; { TGpIntegerObjectListWalkKVEnumerator.Create }

destructor TGpIntegerObjectListWalkKVEnumerator.Destroy;
begin
  FreeAndNil(wkeCurrentKV);
  FreeAndNil(wkeListEnumerator);
  inherited;
end; { TGpIntegerObjectListWalkKVEnumerator.Destroy }

function TGpIntegerObjectListWalkKVEnumerator.GetCurrent: TGpKeyValue;
var
  idx: integer;
begin
  idx := wkeListEnumerator.GetCurrent;
  wkeCurrentKV.Key := wkeListEnumerator.List[idx];
  wkeCurrentKV.Value := TGpIntegerObjectList(wkeListEnumerator.List).Objects[idx];
  Result := wkeCurrentKV;
end; { TGpIntegerObjectListWalkKVEnumerator.GetCurrent }

BTW, you can see another trick in this implementation - enumeration by delegation. I'm reusing my Walk enumerator to do the actual walking.


That works much faster than the interface-based version - 300 ms for my test case. It's still six times slower than the Walk enumerator, though, and it is not really obvious why the difference is so big. Leave that be, for a moment.


The third approach would be to use a record to store the current (Key, Value) pair. Then there is no allocation/deallocation at all, but the resulting code is not faster - the record-based enumerator needs about 500 ms to run the test case.


This slowdown occurs because a record is not returned as a pointer but as a full copy. In the class-based scenario, GetCurrent returns a pointer to the TGpKeyValue object, and that pointer is passed in a register. In the record version, GetCurrent returns not a pointer to the record but the record itself - it copies the full record to the stack so the caller can use this copy - and that is waaaay slower.
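
In code, the record-based version differs only in the return type, but that changes how the result travels back to the caller (the record type and the enumerator class here are hypothetical, sketched for illustration):

type
  TGpKeyValueRec = record
    Key: int64;
    Value: TObject;
  end;

function TGpIntegerObjectListWalkKVRecEnumerator.GetCurrent: TGpKeyValueRec;
var
  idx: integer;
begin
  idx := wkeListEnumerator.GetCurrent;
  // no allocation, but the whole 12-byte record is copied back to the
  // caller's stack on every iteration, instead of one pointer in a register
  Result.Key := wkeListEnumerator.List[idx];
  Result.Value := TGpIntegerObjectList(wkeListEnumerator.List).Objects[idx];
end;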


The speed difference


Let's return to that major speed difference between Walk and WalkKV. I looked at the code but couldn't find any good reason. Then I turned to the CPU view, and it became evident. The problem lay not in the enumerator but in my poorly designed benchmarking code :(


You see, I was timing multiple repetitions of these three loops:

for idx := 1 to 10000 do
  ;

for idx in il.Walk do
  ;

for kv in il.WalkKV do
  ;

Three empty loops, so I'm timing just the enumeration, yes? No!


The first loop just runs from 1 to 10000. A trivial job, and the compiler will generate great code.


The second loop does the same, but with more overhead.


The third loop does much more than that. It also accesses il[] and il.Objects[] (inside GetCurrent).


In real code, the body of the for statement in the first two cases would have to access il[] and il.Objects[] by itself. The body of the third for statement has no need for that - it gets the data neatly prepared in kv.Key and kv.Value.


I changed the loops to:

for idx := 0 to il.Count - 1 do begin
  il[idx];
  obj := il.Objects[idx];
end;

for idx in il.Walk do begin
  il[idx];
  obj := il.Objects[idx];
end;

for kv in il.WalkKV do begin
  kv.Key;
  obj := kv.Value;
end;

I'm just accessing the Items[]/Key information and throwing it away, then copying the Objects[]/Value information into a local variable. All loops are now doing comparable jobs.


The results are now significantly different. The simple for loop needs 300 ms to run the test, the Walk version needs 310 ms (a really small difference) and the WalkKV version needs 470 ms. It is still slower, but by less than a factor of two. If there were real code inside the for loop, the difference would be unnoticeable.


The moral? You should benchmark, but you should also check that you're benchmarking the right thing!


Syntactic sugar


The generic version of WalkKV only supports this kind of code:

for kv in il.WalkKV do
  Log('%d: %s', [kv.Key, TGpString(kv.Value).Value]);

But there's a really neat trick to change this into

for kv in il.WalkKV do
  Log('%d: %s', [kv.Key, kv.StrValue]);

without modifying the WalkKV itself. Can you guess it? Class helpers, of course.


You just have to declare a simple helper in the WalkKV consumer (no need to change WalkKV itself) and you can use StrValue instead of TGpString(kv.Value).Value.

  TGpKeyStrValue = class helper for TGpKeyValue
  public
    function StrValue: string; inline;
  end; { TGpKeyStrValue }

function TGpKeyStrValue.StrValue: string;
begin
  Result := TGpString(Self.Value).Value;
end;

Conclusion: class helpers are great. The key-value walker is useful. Enumerator-induced performance loss is negligible. Enumerators are fun.


---


I've tagged this and all previous enumerator-related articles with the enumerators label, so you can now access them all at once via this simple link.


Sunday, March 16, 2008

On bad examples and smelly code

I've received lots of criticism on my latest article on enumerators. For the convenience of my readers, here are some excerpts.

Removal loops should be iterated backwards to prevent index shuffling.

... while I can certainly see the usefulness in your Slice- and Walk-enumerators ... I don't think they're the best solution to the two examples you quoted.

Simplicity does not justify an example that fails to work correctly. A bad example is still a bad example, no matter how simple.

I'd also like to echo the other comments w.r.t your initial example. Reversing the direction of the iteration is the simplest way to simplify the exemplar task, not invoking enumerators and the such like.

They can be summarized in one sentence: "Your examples suck." The main rationale for that reasoning seems to be the accepted way to delete stuff from a TList/TObjectList/TAnyList - start at the end and proceed backwards.
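
For reference, this is the pattern everybody is recommending (ShouldRemove stands in for whatever removal condition applies):

// iterate backwards so that Delete never shifts a not-yet-visited item
for idx := list.Count - 1 downto 0 do
  if ShouldRemove(list[idx]) then
    list.Delete(idx);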

Well, I have two things to say. [I apologize for responding in a new post, but I feel that this is an interesting topic by itself.]

1. Yes, my examples are simplified. Terribly so. Maybe they are even simplistic. At least some of my readers seem to think so. But that's just the way I like them.

I'm not trying to enforce my ideas on my readers. I'm only displaying them to the public. That's why I tend to minimize all my examples. I'll show you the code, I'll show you the usage, but you should think about the solution and how it applies to your problems. Maybe you'll see that it can be useful and you'll adopt it. Maybe not. Maybe you'll remember it a year from now and suddenly see the light. Maybe you'll use it for a month or two and then decide that I'm a twit and that your previous ways were better. [In this case, please refer to the last paragraph of this post.] I don't really care. I'm giving my ideas away, I'm glad of every piece of feedback and I'm happy if somebody else is using my code. I've received a lot from the Delphi community in the past ten years and I like to give back as much as I can.

2. I don't like the 'traverse the list backwards and all will be well' approach to list deletion. I have a really strong aversion to it. It looks fine, it works fine, but it smells really bad.

I thought a lot about why I dislike it so much, and I think the main reason is that it relies too much on the list internals. Admittedly, this is a very well documented behaviour that will never change in the future. Let me try again - it relies too much on the internals of the underlying structure. If at some point in the future you change the structure, this assumption may fail. OK, it is hard to imagine an underlying structure that would behave almost the same as TList but differently on Delete, but what if you simply forgot to adapt this part of the code to the new structure? Unlikely, I agree.

People will complain that I can't give a strong reason for my fears, and they will be correct. I don't have a solid case against this kind of list cleanup.

You see, if I've learned anything in my programming career, it's that some code should never be used. When you program, some things work out and some don't. Some code works fine from the beginning, and some causes problem after problem and bug report after bug report. A good programmer recognizes that and stops using the code/fragments/patterns that are likely to cause problems in the future. Somehow I have classified the 'downto deletion' in that category. I don't remember why. That's why I'm not saying it is bad, only that it smells bad.

The other reason is code readability. I find code that explicitly states its intention more readable and more maintainable. Your opinion may, of course, differ.

---

May my ramblings not stop you from commenting. I'm happy when I receive comments that point out my errors, however justified they may be. Even if I don't agree with them, they make me rethink my strategies, so in the end I'm even more sure of my approach.

Flame on!


Monday, March 03, 2008

Debugging With Lazy Breakpoints

Sometimes you're hunting down a problem that occurs very rarely. When it does, an exception is triggered. If you're debugging, Delphi should pop up (it does) and allow you to look at the stack and inspect local variables (it doesn't - this happens to me a lot). What can you do?

You could set a conditional breakpoint just before the problem occurs and test for the problem-causing condition there. That would work in the short run, but if you're hunting the problem for many weeks, the breakpoint may move around or even get disabled. Delphi's breakpoint storage mechanism is somewhat unstable if you edit the code around the breakpoint a lot (and even more so if you edit the code on multiple computers).

A more stable solution is to do everything in code, from testing whether the debugger is running to pausing the program and breaking into the debugger. The following fragment illustrates the approach:

if DebugHook <> 0 then   // test if the debugger is running
  if problem then        // test the condition
    asm int 3 end;       // break into the debugger

Not very obvious and full of arcane knowledge, but it works.
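
If you don't like sprinkling that arcana all over the place, it wraps nicely into a helper (the name BreakIf is mine, not a standard routine):

procedure BreakIf(condition: boolean);
begin
  // DebugHook is nonzero while the IDE debugger is attached
  if condition and (DebugHook <> 0) then
    asm int 3 end; // the debugger stops here; one Step Out lands at the call site
end;

// usage:
//   BreakIf(bufferLen > maxExpectedLen);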


Saturday, March 01, 2008

Paint It Black (And Red)

Just in case you're not following Julian's blog, I'd like to bring your attention to his new series on red-black trees. Red-black trees are a great data structure, but they are notoriously hard to code correctly, as the algorithm is full of weird edge cases.

I've always been a follower of his 'algorithms and data structures' articles in The Delphi Magazine, and I'm sure his new series will be presented at an equally high standard. He's using C#, VB and .NET, but I'm sure the series will be interesting for us Delphi users, too. [And he already gave us an RB tree implementation in the EZDSL library - thanks, Julian!]

Of course, if you own his DADS book, you can read all about RB trees there.


Wednesday, October 10, 2007

Why You Should *Always* Use SQL Parameters

http://xkcd.com/327/


XKCD is just great.
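
In Delphi terms (with qry standing in for any TQuery-style data access component), the comic's moral boils down to this:

// vulnerable: the value is glued into the SQL text - hello, Bobby Tables
qry.SQL.Text := 'SELECT * FROM students WHERE name = ''' + studentName + '''';
qry.Open;

// safe: the driver passes the value separately; no quoting tricks apply
qry.SQL.Text := 'SELECT * FROM students WHERE name = :name';
qry.ParamByName('name').AsString := studentName;
qry.Open;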


Friday, July 13, 2007

What I Tell You Three Times is True

When I was young and stupid, I made modifications to my code at will. A little here, a little there, a little everywhere. And I always knew what I had changed, what I had to undo and what was still missing. Unless I forgot.

When I was a little older, I started marking changes that had to be undone with comments.

Then Borland (or was it Inprise at the time? Who cares.) added TODO support to the editor, and since then I've been writing

  // TODO 1 -oPrimoz Gabrijelcic : only for debugging, remove!
ThisIsUsedOnlyForDebugging;

This makes a nice entry in my TODO list, which is visible at all times.


Later I started adding warnings that don't show up in the TODO list but in the Messages window:

{$IFDEF LogSomething}
{$MESSAGE WARN 'logging of Something enabled'}
{$ENDIF}

An added bonus of this approach is that $MESSAGE messages are also shown by the command-line compiler. I always write code that produces no hints or warnings, so such messages really stand out from the normal compiler output.

Recently, however, I started to doubt my sanity. I caught myself writing

// TODO 1 -oPrimoz Gabrijelcic : testing, remove!
{$MESSAGE WARN 'An important module is disabled'}
rrEventLogger.LogEvent('An important module is disabled!', svWarning);
// Process;

(In reality, the messages were more descriptive.) The first line warns me to remove this code before building a release version of the program. The second line warns me whenever I compile the code, and the third one logs a warning into the program's log. I had a reason for all this - I had to run this version at a customer's site for debugging purposes, but I didn't want it to be used anywhere else.

Still, I've been asking myself ever since - am I being overprotective of myself?
