Kodewerx

Our culture has advanced beyond all that you could possibly comprehend with one hundred percent of your brain.

All times are UTC - 8 hours [ DST ]




[ 15 posts ]
PostPosted: Sat Feb 16, 2008 10:42 pm 
Krew (Admin)

Joined: Sun Oct 01, 2006 9:26 pm
Posts: 3765
Title: All in a day's work.
Quote:
How does one solve the problems with current debuggers? First, by identifying those problems. Next by addressing them. Finally, in implementation.

So what are the problems with debuggers integrated in today's emulators? Well, for one thing, they are integrated. This causes portability problems in many cases (I am ashamed to admit my guilt in perpetuating this problem, by writing debuggers that vendor-lock users into the Windows operating system). It can also cause undue stress for debugger developers. We are a lazy species, and we do not like rewriting the same debugger multiple times, attempting to port our work to a newer, better emulator, or porting it to a completely new emulated architecture. And then there is the problem of features, or lack thereof. Some hackers and homebrewers need specialized features in their debuggers.

Modularity is one possible solution to these problems. The first thing to do is segregate the low-level debug primitives (functions and whatnot) from the user interface; make the interface modular, interchangeable with any interface. Then you define how the debug primitives interact with the interface via a communications link; make the communications link modular, able to establish communication using any number of interchangeable modules for TCP/IP sockets, operating system pipes, RS232, USB, etc. Next, you define the protocol; make the protocol modular, a 'universal language' that describes generic debug primitives, and allow it to be extensible as necessary. Finally, you define those debug primitives and provide a base implementation that can be expanded if required. However, a well-defined set of primitives is unlikely to need expansion for anything but the most exotic architecture configurations.
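To make the separation above concrete, here is a minimal sketch in C of what "make the communications link modular" could look like: the debug primitives never call the interface directly, only a pluggable transport. All names here are invented for illustration; this is not an existing API.

```c
/* Sketch: debug primitives talk to the interface only through a
 * swappable transport module (TCP, pipes, RS232, USB would all
 * implement this same shape). Names are illustrative only. */
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct transport {
    size_t (*send)(struct transport *t, const void *buf, size_t len);
    size_t (*recv)(struct transport *t, void *buf, size_t len);
    void  *ctx; /* module-private state */
} transport;

/* A loopback transport, useful for testing: send() stores the bytes,
 * recv() hands them back. */
typedef struct { char buf[256]; size_t len; } loop_ctx;

static size_t loop_send(transport *t, const void *buf, size_t len) {
    loop_ctx *c = t->ctx;
    memcpy(c->buf, buf, len);
    c->len = len;
    return len;
}

static size_t loop_recv(transport *t, void *buf, size_t len) {
    loop_ctx *c = t->ctx;
    if (len > c->len) len = c->len;
    memcpy(buf, c->buf, len);
    return len;
}

/* A debug primitive expressed over the transport rather than as a
 * direct call into an integrated GUI. */
static size_t dbg_send_command(transport *t, const char *cmd) {
    return t->send(t, cmd, strlen(cmd) + 1); /* include the NUL */
}
```

Swapping the emulator onto a real socket, or the interface onto another machine, then only means supplying a different `transport`.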

What does all of this mean? Where does it leave us, the debugger developers? And where does it place the users, the hackers, and the homebrew developers?

It means that the debugger developers can implement an accepted standard (accepted being the keyword) for debugger support within not only emulators, but any kind of virtual machine or interpreted byte code in any kind of program. It could be a simple set of debug primitives (in a static or linked library, for example) added by an emulator author (or emulator extender) that connects to a debugger interface of the user's choice. The interface might be highly specialized for a particular architecture, or it might be very complex and advanced with universal support for many architectures. This would put a large number of options into the hands of users.

Now let me try to get a more solid description of this idea out there. The number one underlying technology to be assessed to make any of this work is simply the protocol. That means a formal description of how a target (an emulator, or other program wishing to use debugger functionality) talks to an interface (a separate program designed to give the user direct access to the debug primitives and link them together in ways that provide many very advanced features ... such as stepping backwards in architecture-time). This would probably be a command reference which supplies things like:

1) A description of the architecture (the emulated system, like NES). This description would include the number of CPUs available, the type of the CPUs, endianness, memory maps as accessible by the CPU, memory maps not accessible to the CPU, etc. Basically a complete virtual model of the architecture.
2) Debug primitives: breakpoints and stepping functionality; read/write access to the memory maps, CPU registers and statuses, and access to internal hardware registers; interrupt and exception handling; scripted macros with callback functions; essentially all of the basic functions which the interface can use to procedurally create high-level features.
3) Extensibility; able to provide expansions to architecture descriptions, debug primitives, and other specialty features.
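As one way to picture item (1), here is a toy C structure for an architecture description, using NES as the example the post itself uses. Every field name here is invented for illustration; the real protocol would define its own wire format.

```c
/* Sketch: what the "complete virtual model of the architecture" from
 * point (1) might look like as data. All names are invented. */
#include <assert.h>
#include <stdint.h>

typedef enum { ENDIAN_LITTLE, ENDIAN_BIG } endianness;

typedef struct {
    const char *name;        /* e.g. "WRAM" */
    uint64_t    base;        /* start address as seen by the CPU */
    uint64_t    size;        /* length in bytes */
    int         cpu_visible; /* 0 for maps only the debugger can reach */
} memory_map;

typedef struct {
    const char       *cpu_type; /* e.g. "6502" */
    endianness        endian;
    int               num_maps;
    const memory_map *maps;
} cpu_desc;

typedef struct {
    const char     *arch_name; /* e.g. "NES" */
    int             num_cpus;
    const cpu_desc *cpus;
} arch_desc;

/* A toy NES description: one CPU and its 2 KiB of work RAM. */
static const memory_map nes_maps[] = {
    { "WRAM", 0x0000, 0x0800, 1 },
};
static const cpu_desc nes_cpus[] = {
    { "6502", ENDIAN_LITTLE, 1, nes_maps },
};
static const arch_desc nes = { "NES", 1, nes_cpus };
```

An interface receiving such a description knows, without NES-specific code, which memory ranges it may read and how to display multi-byte values.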

With such a protocol in place, the interface can do the rest of the high-level work; disassembling, video memory viewing and modification, hex editing, cheat searching and management, etc.

I'm hoping this has been verbose enough that you all understand where I am coming from, but not so verbose that I've created confusion or gone completely in the wrong direction with the discussion.

Bottom line is, I think we only need to agree on one thing: the protocol. If you refuse to believe that, and only want to do your own thing with your own emulator, that's quite alright. But if you want to reap the benefits of interchangeable debugger interfaces [pick your favorite, or just choose the right one for the job at hand] that are platform-independent [can run on any host operating system, even a completely different machine from the target emulator; not at all bound to the target emulator] and potentially architecture-independent [capable of debugging NES, Genesis, PS2, Wii, Java, brainf**k, the custom scripting language in your new game, you name it!] then I say let's work some crazy Voodoo and invent ourselves a standard for modern debugging!


That said, RFC-909, Loader Debugger Protocol looks like a good place to start.

_________________
I have to return some video tapes.

Feed me a stray cat.


PostPosted: Sun Feb 17, 2008 11:31 am 
Komrade

Joined: Tue Mar 27, 2007 10:18 am
Posts: 1328
Parasyte wrote:
such as stepping backwards in architecture-time)

First, I'd just like to say that backtracing makes Hextator wet.

Second, I'm an idiot and can't understand half of what you said, but it looks like something we should have been doing all along. We, as in the people who actually do all the work (myself excluded it seems). >.>''

I can at least understand that "standards are fun". Well, usually. Standardized testing isn't a good example.



PostPosted: Thu May 08, 2008 2:06 am 
Kommunist

Joined: Tue Oct 17, 2006 1:37 pm
Posts: 4
Location: Germany
Parasyte wrote:
That said, RFC-909, Loader Debugger Protocol looks like a good place to start.

Nice find, Parasyte. I never knew that something like this existed.

What is also interesting is RFC 908: "The Reliable Data Protocol (RDP) is designed to provide a reliable data transport service for packet-based applications such as remote loading and debugging." RFC 1151 defines version 2 of RDP, and the Reliable UDP Protocol (RUDP) is based on both RFCs. An implementation of RUDP can be found here.

I'd love to implement LDP and RUDP for the PS2...

Advancing LDP to version 2 would be a great task, too. Do you already have any concrete ideas?


PostPosted: Thu May 08, 2008 5:03 pm 
Krew (Admin)
Plan 9 from User Space! At one point, I wanted to install Plan 9 just to have a look through it. I think a few parts of the LDP spec are probably archaic by now, and some probably weren't of much use even at the time it was designed. Though I can't think of any specific examples at the moment. At least one thing it definitely needs is a greater memory range, at least up to 128-bit (which should last us all several years to come). It could also use some form of segmentation in memory ranges; there's no need to use a 128-bit memory range on an architecture which only supports 16-bit, for example.

Also, some thought might be put into a translation layer, especially for architectures incapable of implementing an ethernet protocol. A quick example could be a GameBoy connected to a UNIX box through a serial port. A translation layer would then be required to translate LDP data to serial streams, and vice versa. That could be accomplished through a small daemon designed specifically for this task. But defining such a compatible serial protocol AND the translation layer would be quite valuable.
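The core of such a translation daemon is just shuttling bytes between two file descriptors: the serial line on one side and the network link on the other. Here is a minimal POSIX sketch of that inner step; the actual LDP framing and any serial-specific handshaking are omitted, and the function name is invented.

```c
/* Sketch of the translation daemon's core loop body: forward bytes
 * between a "serial" fd and a "network" fd. LDP framing omitted;
 * a real daemon would select() on both fds and relay in both
 * directions. POSIX only. */
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Forward up to one buffer-full from src to dst.
 * Returns the number of bytes moved, 0 on EOF, or -1 on error. */
static ssize_t relay_once(int src, int dst) {
    char buf[512];
    ssize_t n = read(src, buf, sizeof buf);
    if (n > 0)
        return write(dst, buf, (size_t)n);
    return n;
}
```

In the GameBoy example, `src` would be the tty connected to the link cable adapter and `dst` a TCP socket carrying LDP, with a translation step between the `read` and the `write`.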



PostPosted: Fri May 09, 2008 12:53 am 
Komrade

Joined: Tue Mar 27, 2007 6:23 pm
Posts: 1354
Location: Mario Raceway, 1509.831, 217.198, -564.429
Title: Mario Kart 64 Hacker
But is there any need for a remote hardware debugger on a Game Boy in this day and age? Game Boy emulators work great. DS, maybe.

_________________
HyperNova Software is currently down; check out my PSP/DS/Game Boy/Windows/Linux homebrew, ROM hacks, and Gameshark codes!


PostPosted: Sat May 10, 2008 10:49 am 
Krew (Admin)
It could be any embedded system. I just used GameBoy as an example. Another example would be your router/Wi-Fi access point.



PostPosted: Wed Jul 29, 2009 4:30 pm 
Krew (Admin)
Major update:

  • The protocol now has a name: the Scalable Remote Debugger Protocol (SRDP).
  • Some collaboration has been ongoing at GSHI: http://gshi.org/vb/showthread.php?t=3286
  • An actual proposal is in the works: http://www.kodewerx.org/wiki/index.php/ ... l#Proposal

As the proposal comes to fruition (receives feedback) I will begin implementing it. This implementation means a usable [and portable] client/server library which can be used in many kinds of projects. I am thinking of writing the library in plain old C, but I'm also looking for other suggestions. Writing it in C++ is another possibility, and may even make working with the protocol data structure easier. However, there is still a lot to consider before going forward with any implementations.

This is very important progress being made, here. A "scalable" protocol is the only thing stopping me, personally, from adding a debugger to a certain decent-ish open source N64 emulator. It's also the reason I never completed NTRrd, nor finished the GCNrd rewrite. Joy!



PostPosted: Sat Jan 09, 2010 7:42 pm 
Krew (Admin)
The ball is rolling!

I'm starting to spread information about SRDP to various forums, trying to spark interest and conversation. The IRC channel needs some loving:

irc.freenode.org
#srdp

And this weekend, I am focusing on all aspects of the project. Foremost, a library (libsrdp) which will serve as a reference implementation and test suite. The API and ABI details are the most important at this time. I'll have a Mercurial source code repository for the current state of work. I'll hold discussions on debugging in IRC, as well as some random talk of implementation details and whatnot.

I just want to get people involved and interested right now. And as a central community is formed, actual roles will be developed for every individual who wants to do something to help out! There is really a lot available for anyone to help out with. First step: join the IRC channel and start a discussion, or join one in progress!

We look forward to hearing from you all!



PostPosted: Tue Jul 27, 2010 11:33 am 
Komrade
Quote:
VLIs (variable-length integers) and their affect

http://www.kodewerx.org/wiki/index.php/ ... r_Protocol

My login creds aren't working anymore and I'm lazy.



PostPosted: Tue Jul 27, 2010 2:08 pm 
Krew (Admin)
I had to think this one over, but I believe "affect" is the correct word in that context. http://en.wiktionary.org/wiki/affect#Usage_notes



PostPosted: Tue Jul 27, 2010 7:32 pm 
Komrade
As I understand it, when you "affect" something, you "have an effect" on it.

Quote:
* “...new governing coalitions during these realigning periods have effected major changes in governmental institutions.”
* “...new governing coalitions during these realigning periods have affected major changes in governmental institutions.”

That is the only time I can perceive where the sentence I've given doesn't help with determining which word to use. However, in the examples given above, it makes sense just the way the wiki explains it: to effect major changes means to bring them about, but to affect them means to alter the major changes that are already taking place.

In the case of your entry, it still seems like "effect" is more appropriate, because it is about an "effect that results", rather than stating a thing was "affected".

If you still say that's wrong and this oddity of our language is lost on me, I don't know what you can do to make it make sense to me.

Go to Google. Type "and their affect on" and "and their effect on". Google will choose the results for the latter in both cases, and note how the things "being affected" are still things that already exist.

:/



PostPosted: Wed Jul 28, 2010 7:04 am 
Krew (Admin)
The noun in that sentence is "the format". The "format", although not completely specified, can be considered already "in existence". The only manifestation resulting from the introduction of VLIs into the format is the VLI encoding (which is a silly thing to state, but there it is). Without VLI encoding, numbers would be represented the same way as floating point numbers: as a null-terminated UTF-8 string (making it possible to combine unsigned numbers, signed numbers, and floating point numbers into a single data type).

Anyway, the introduction of VLIs into the format is only changing something that already exists (the format), rather than manifesting something new, or being the result of an action. And that fits into "affect" better than "effect". I could be totally wrong, but that was my logic when I initially wrote that line (can you believe I actually did the research when I chose that word so many months ago? Well, I did. That's why I pointed out the Wiktionary article ... I've read it countless times trying to make sense of my grammar.)



PostPosted: Wed Jul 28, 2010 4:30 pm 
Komrade
Yes, I understand that was your intent, and it does fit with the Wiktionary article, but I've never seen the word used that way; even in scholarly contexts, where you'd be more likely to presume what you're reading is accurate, I've more often seen "effect" used instead.

Of course, this is the difficulty of wielding a kludgy language like English.



PostPosted: Wed Jul 28, 2010 4:54 pm 
Krew (Admin)
Correct. English does suck and the verb "effect" is far more common. "Affect" is typically used in emotive contexts (as in, affection, affectionate).



PostPosted: Thu Nov 04, 2010 6:19 pm 
Krew (Admin)
After some serious consideration, it's apparent that a VLI "format" just isn't enough; the data still needs to be worked with and manipulated. To this end, I started looking at the options available for arbitrary precision math libs. My first choice was GMP, since it's fairly ubiquitous, and supports just about every operation you could ever possibly want.

So, the first bit of research I did while looking into GMP was to find its "native" serialized data output format. What I found was mpz_out_raw(), which does exactly that: outputs a "raw" serialized byte stream that represents the number. I thought that was pretty awesome until I read:

Quote:
The size is 4 bytes ...

That means the smallest possible number (0) will be represented in 5 bytes; a 4-byte "size" header, and 1 byte of data. That's just unacceptable for SRDP. The original thought of using VLIs in the first place was to have a super-compact byte stream for all of the serialized data. If all addresses and registers, etc. require a 4-byte size indicator, that just destroys the whole advantage of using any arbitrary precision numbering system.

GMP is too big for my purposes, anyway. SRDP doesn't need support for rational numbers, floating point numbers, natural numbers, or the random number functions that GMP provides. So I started looking for other options, and I came across LibTomMath.

At first glance, LibTomMath looks seriously bad ass. It's lightweight, "decently optimized", and its author includes a whole book (in PDF format) in the source code archive that details the library, its uses, and rationale. The library is also in the public domain, so that's a huge win.

The research stage begins again: 1) serializing is done with mp_to_unsigned_bin(). 2) the byte stream size is gathered with mp_unsigned_bin_size(). It's an interesting approach; the byte stream size and data are separated.

So a size header is still required. But 4 bytes is serious overkill. How about just a single byte? The properties of a single-byte size header go something like this:

  • Of the 256 possible single-byte values, only one, the number 0, serializes to a single byte: 0 = 0x00.
  • A number up to 255 bytes in length will be serialized to 256 bytes or less.

The special case, the number 0, is represented as a single byte: a size header of 0 indicating zero data bytes. Other small numbers like 1 get the size header plus a single data byte, two bytes total.
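The scheme described above can be sketched in a few lines of C: one length byte, then that many base-256 digits. This is only my reading of the post, using big-endian digit order; the actual SRDP byte order and function names are not specified here.

```c
/* Sketch of the single-byte size header scheme: serialize a number as
 * one length byte followed by that many base-256 digits (most
 * significant digit first; the real SRDP digit order is unspecified
 * in this post). Names are invented for illustration. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Encode v into out; returns total bytes written (1 header + data).
 * A uint64_t needs at most 8 digits, so out must hold 9 bytes. */
static size_t vli_encode(uint64_t v, uint8_t out[9]) {
    uint8_t tmp[8];
    size_t n = 0;
    while (v) { tmp[n++] = (uint8_t)(v & 0xFF); v >>= 8; }
    out[0] = (uint8_t)n;              /* 0 encodes as the single byte 0x00 */
    for (size_t i = 0; i < n; i++)
        out[1 + i] = tmp[n - 1 - i];  /* reverse into MSB-first order */
    return 1 + n;
}

/* Decode a size-prefixed stream back into a number. */
static uint64_t vli_decode(const uint8_t *in) {
    uint64_t v = 0;
    for (uint8_t i = 0; i < in[0]; i++)
        v = (v << 8) | in[1 + i];
    return v;
}
```

So 0 costs one byte, 1 costs two, and 65,535 (0xFF 0xFF) costs three: a two-byte header-plus-data overhead at worst for small values, versus GMP's fixed five bytes for the number 0.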

But, let's stop there, for a moment. What do I mean by "a number up to n-bytes in length?" I'm talking about a number serialized in base-256. That means, each byte is a single digit, and each digit has 256 states. (Think base-10: each digit has 10 possible states.) If we have a 2-byte number, the highest possible value it can store is 256^2 - 1 = 65,535. And a 255-byte number can store a maximum value of 256^255 - 1 =

Broken up for readability:
126,238,304,966,058,622,268,417,487,065,116,999,845,484,776,053,576,109,500,509,161,826,268, \
184,136,202,698,801,551,568,013,761,380,717,534,054,534,851,164,138,648,904,527,931,605,160, \
527,688,095,259,563,605,939,964,364,716,019,515,983,399,209,962,459,578,542,172,100,149,937, \
763,938,581,219,604,072,733,422,507,180,056,009,672,540,900,709,554,109,516,816,573,779,593, \
326,332,288,314,873,251,559,077,853,068,444,977,864,803,391,962,580,800,682,760,017,849,589, \
281,937,637,993,445,539,366,428,356,761,821,065,267,423,102,149,447,628,375,691,862,210,717, \
202,025,241,630,303,118,559,188,678,304,314,076,943,801,692,528,246,980,959,705,901,641,444, \
238,894,928,620,825,482,303,431,806,955,690,226,308,773,426,829,503,900,930,529,395,181,208, \
739,591,967,195,841,536,053,143,145,775,307,050,594,328,881,077,553,168,201,547,775


Perhaps you can appreciate the scope of a single-byte size header. Now just imagine the number that maxes out GMP's 4-byte size header. I will leave that as an exercise for you, dear reader, since bc refuses to calculate 256^4294967295 - 1.

This seems like a solid plan, assuming no one needs numbers any larger than that. A pretty safe assumption, I'll bet.

Since arbitrarily large numbers like this will be so rare, I'm planning an API to convert these VLIs into 64-bit integers. The API will fetch the VLI data, deserialize it, and return a 64-bit (un)signed integer if the number will fit. Or else it will return an error, and the number can still be processed/handled using the LibTomMath functions.
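That planned conversion API might look something like the sketch below, again assuming the size-prefixed base-256 stream from earlier. The function name and error convention are invented here; a real implementation would pull the digits out of a LibTomMath `mp_int` instead of a raw buffer.

```c
/* Sketch of the planned "does it fit in 64 bits?" conversion.
 * in: size-prefixed base-256 stream (1 length byte + MSB-first digits).
 * Returns 0 on success, -1 if the value cannot fit in 64 bits, in
 * which case the caller falls back to the bignum routines. */
#include <assert.h>
#include <stdint.h>

static int vli_to_u64(const uint8_t *in, uint64_t *out) {
    if (in[0] > 8)
        return -1;  /* more than 8 base-256 digits exceeds 64 bits */
    uint64_t v = 0;
    for (uint8_t i = 0; i < in[0]; i++)
        v = (v << 8) | in[1 + i];
    *out = v;
    return 0;
}
```

Callers get the fast path (a plain `uint64_t`) for virtually every real-world register or address, and only the rare oversized number pays the cost of full arbitrary precision handling.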

This theory fits elegantly with the "Scalable" part of SRDP. I'm still open to thoughts and opinions...



Powered by phpBB® Forum Software © phpBB Group