Hi,
I'm looking for an algorithm that would convert a list such as:
l1 = [1,2,3,4,6,7,8] # represents the decimal number 12345678
l2 = func(l1)
# l2 = [0x1, 0x2, 0xD, 0x6, 0x8, 0x7] # represents 0x12D687
I'm using Python to prototype the algorithm: this will move to C in an embedded
system where an int has 16 bits. I do not wish to use any Python library.
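One way to do this without any intermediate value ever exceeding 159 (so it maps directly onto a 16-bit int in C) is long division of the digit array by 16, collecting remainders. A sketch of the idea; the function name is made up here:

```python
def dec_digits_to_hex_digits(dec):
    """Long-divide a list of base-10 digits by 16 until zero,
    collecting remainders as base-16 digits (most significant first).
    No intermediate value exceeds 159, so this ports to 16-bit C ints."""
    digits = list(dec)
    hex_digits = []
    while any(digits):
        rem = 0
        quotient = []
        for d in digits:
            cur = rem * 10 + d          # at most 15*10 + 9 = 159
            quotient.append(cur // 16)
            rem = cur % 16
        hex_digits.append(rem)          # remainders come out LSB first
        while quotient and quotient[0] == 0:
            quotient.pop(0)             # drop leading zeros of the quotient
        digits = quotient
    hex_digits.reverse()
    return hex_digits if hex_digits else [0]
```

Applied to the corrected example that appears later in the thread, [1,2,3,4,5,6,7,8] gives [0xB, 0xC, 0x6, 0x1, 0x4, 0xE].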
Regards,
Philippe
Jul 30 '06
Sorry forgot a few answers/comments:
John Machin wrote:
>**** SHOULD BE >= : currently add([6, 6], [4, 4]) -> [10, 10]
True, thanks
*** try - 10 instead of % 10
If the first operand is 19, you have a bug!
This might save a few CPU cycles on your smartcard
can it ? each array value will be [0..9]
>>**** SHOULD CHECK FOR CARRY AT END: currently add([9], [8]) -> [7]; should return [1, 7] or handle overflow somehow
True, this actually should become an error from the card to the calling
device. That would be a pretty large number though.
>>**** MINUS ZERO?
I just do not want to leave it at one(1) if it is.
Regards,
Philippe
Philippe Martin wrote:
John Machin wrote:
So why don't you get a freely available "bignum" package, throw away
the bits you don't want, and just compile it and use it, instead of
writing your own bug-ridden (see below) routines? Oh yeah, the bignum
package might use "long" and you think that you don't have access to
32-bit longs in the C compiler for the 8-bit device that you mistook
for an ARM but then said is an Smc8831 [Google can't find it] with a
CPU that you think is a SC88 [but the manual whose URL you gave is for
an S1C88] ...
Thanks for the fixes - still looking at it.
You are correct, all "bignum" packages I found needed 32 bits.
Yes I still see from my documentation that there is no "long" handled by my
compiler.
Have you actually tried it? Do you mean it barfs on the word "long"
[meaning that it's not an ANSI-compliant C compiler], or that "long" is
only 16 bits?
>
I did make a mistake on the CPU (and I really do not care what it is) - you
wanted some ref (I still do not see why)
because (1) [like I said before] gcc appears to be able to generate
code for a vast number of different CPUs (2) because I find it very
difficult to believe that a C compiler for the CPU on a device in
current use won't support 32-bit longs - and so far you have presented
no credible evidence to the contrary
and I googled S1C88 and sent you a
link as that is the name of the compiler's directory.
and is that or is it not the correct link for the documentation for the
compiler that you are using??????
>
The reason I first came here was to not have to write my "... own
bug-ridden ..." (how nice) ... I have plenty of other bugs to write first.
John Machin wrote:
Have you actually tried it? Do you mean it barfs on the word "long"
[meaning that it's not an ANSI-compliant C compiler], or that "long" is
only 16 bits?
:) if the documentation tells me there is no 32 bit support, why should I
not believe it ?
because (1) [like I said before] gcc appears to be able to generate
code for a vast number of different CPUs (2) because I find it very
difficult to believe that a C compiler for the CPU on a device in
current use won't support 32-bit longs - and so far you have presented
no credible evidence to the contrary
I can recall working on a sparclite many years ago (32 bits) with a
cross-compiler called g++ (supported by Cygnus) that handled the type "long
long" = 64 bits.
As far as the credible evidence ... you're hurting my feelings ;)
>
> and I googled S1C88 and sent you a link as that is the name of the compiler's directory.
and is that or is it not the correct link for the documentation for the
compiler that you are using??????
Neither can I ! - never found any documentation online ... got it from my
device supplier.
Regards,
Philippe
Philippe Martin wrote:
Sorry forgot a few answers/comments:
John Machin wrote:
**** SHOULD BE >=
currently add([6, 6], [4, 4]) -> [10, 10]
True, thanks
*** try - 10 instead of % 10
If the first operand is 19, you have a bug!
This might save a few CPU cycles on your smartcard
can it ? >
can WHAT do WHAT?
each array value will be [0..9]
if so, you can use - 10 instead of % 10
if not, then whatever produced your input has a bug
>
>**** SHOULD CHECK FOR CARRY AT END: currently add([9], [8]) -> [7]; should return [1, 7] or handle overflow somehow
True, this actually should become an error from the card to the calling
device. That would be a pretty large number though.
>**** MINUS ZERO?
I just do not want to leave it at one(1) if it is.
The question is "what do you think you are achieving by having a MINUS
sign in front of the zero instead of plain old ordinary zero?"
Philippe, please! The suspense is killing me. What's the cpu!?
For the love of God, what's the CPU?
Ican'ttakeitanymoreit'ssuchasimplequestioningly yours,
~Simon
Simon Forman wrote:
Philippe, please! The suspense is killing me. What's the cpu!?
For the love of God, what's the CPU?
Ican'ttakeitanymoreit'ssuchasimplequestioningly yours,
Yes, please .....
I've found a C compiler manual on the web for the Epson S1C33 CPU as
well as the one for the S1C88 that Philippe pointed me at. They have
two things in common:
(1) explicitly mention support for 32-bit longs
(2) in the bottom right corner of most pages, it has the part number
(which includes S1Cxx) and the version number.
Philippe has what he believes to be the manual for the C compiler for
the CPU in the device, but couldn't find it on the web.
Perhaps if Philippe could divulge the part number that's in the bottom
right corner of the manual that he has, and/or any part number that
might be mentioned in the first few pages of that manual, enlightenment
may ensue ....
Cheers,
John
John Machin wrote:
>
Simon Forman wrote:
>Philippe, please! The suspense is killing me. What's the cpu!?
For the love of God, what's the CPU?
Ican'ttakeitanymoreit'ssuchasimplequestioningly yours,
Yes, please .....
I've found a C compiler manual on the web for the Epson S1C33 CPU as
well as the one for the S1C88 that Philippe pointed me at. They have
two things in common:
(1) explicitly mention support for 32-bit longs
(2) in the bottom right corner of most pages, it has the part number
(which includes S1Cxx) and the version number.
Philippe has what he believes to be the manual for the C compiler for
the CPU in the device, but couldn't find it on the web.
Perhaps if Philippe could divulge the part number that's in the bottom
right corner of the manual that he has, and/or any part number that
might be mentioned in the first few pages of that manual, enlightenment
may ensue ....
Cheers,
John
That was cute ... over and out !
Long live Python.
A+
Philippe
On 2006-08-01, Philippe Martin <pm*****@snakecard.com> wrote:
>Perhaps if Philippe could divulge the part number that's in the bottom right corner of the manual that he has, and/or any part number that might be mentioned in the first few pages of that manual, enlightenment may ensue ....
That was cute ... over and out !
Or perhaps it may not.
Methinks it was all just a rather good troll.

Grant Edwards   grante at visi.com   Yow! Where's the Coke
machine? Tell me a joke!!
Grant Edwards wrote:
On 2006-08-01, Philippe Martin <pm*****@snakecard.com> wrote:
Perhaps if Philippe could divulge the part number that's in
the bottom right corner of the manual that he has, and/or any
part number that might be mentioned in the first few pages of
that manual, enlightenment may ensue ....
That was cute ... over and out !
Or perhaps it may not.
Methinks it was all just a rather good troll.

Now we have a few more questions i.e. apart from what CPU is in
Phillipe's device:
1. WHO was Philippe replying to - Simon or me?
2. WHAT was cute?
3. Grant thinks WHAT might have been a rather good troll by WHOM?
Ah well never mind ... I think I'll just report the whole thread to
thedailywtf and move on :)
John Machin wrote:
br***********************@yahoo.com wrote:
Philippe Martin wrote:
Yes, I came here for the "algorithm" question, not the code result.
To turn BCD x to binary integer y,
set y to zero
for each nibble n of x:
y = (((y shifted left 2) + y) shifted left 1) + n
Yeah yeah yeah
i.e. y = y * 10 + n
he's been shown that already.
Problem is that the OP needs an 8-decimal-digit (32-bit) answer, but
steadfastly maintains that he doesn't "have access to" long (32-bit)
arithmetic in his C compiler!!!
And he doesn't need one. He might need the algorithms for shift and
add.
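The quoted three-line loop, rendered as a Python sketch: the shift-and-add is just y = y*10 + n built without a multiply ((y << 2) + y is 5*y, and the outer << 1 doubles that). Note Python's unbounded ints hide the 16-bit-width problem that dominates this thread; on the target, y would have to be one of the multi-byte representations discussed below:

```python
def bcd_to_int(nibbles):
    """Accumulate a binary integer from decimal digits, most
    significant first, using only shifts and adds."""
    y = 0
    for n in nibbles:
        y = (((y << 2) + y) << 1) + n   # i.e. y = y*10 + n
    return y
```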

Bryan br***********************@yahoo.com wrote:
John Machin wrote:
br***********************@yahoo.com wrote:
Philippe Martin wrote:
Yes, I came here for the "algorithm" question, not the code result.
>
To turn BCD x to binary integer y,
>
set y to zero
for each nibble n of x:
y = (((y shifted left 2) + y) shifted left 1) + n
Yeah yeah yeah
i.e. y = y * 10 + n
he's been shown that already.
Problem is that the OP needs an 8-decimal-digit (32-bit) answer, but
steadfastly maintains that he doesn't "have access to" long (32-bit)
arithmetic in his C compiler!!!
And he doesn't need one. He might need the algorithms for shift and
add.
I hate to impose this enormous burden on you but you may wish to read
the whole thread. He was given those "algorithms". He then upped the
ante to 24 decimal digits and moved the goalposts to some chip running
a cut-down version of Java ...
TTFN
John
John Machin wrote:
br***********************@yahoo.com wrote:
John Machin wrote:
br***********************@yahoo.com wrote:
To turn BCD x to binary integer y,
set y to zero
for each nibble n of x:
y = (((y shifted left 2) + y) shifted left 1) + n
>
Yeah yeah yeah
i.e. y = y * 10 + n
he's been shown that already.
>
Problem is that the OP needs an 8-decimal-digit (32-bit) answer, but
steadfastly maintains that he doesn't "have access to" long (32-bit)
arithmetic in his C compiler!!!
And he doesn't need one. He might need the algorithms for shift and
add.
I hate to impose this enormous burden on you but you may wish to read
the whole thread. He was given those "algorithms".
Quite some long-winded code and arguing about platforms in the rest
of the thread. My version assumes three subroutines: extracting
nibbles, shifting, and adding. Those are pretty simple, so I asked
if he needed them rather than presenting them. Assuming we have
them, the algorithm is three lines long. Don't know why people
have to make such a big deal of a BCD converter.
He then upped the
ante to 24 decimal digits and moved the goalposts to some chip running
a cut-down version of Java ...
He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.

Bryan br***********************@yahoo.com wrote:
>My version assumes three subroutines: extracting
nibbles, shifting, and adding. Those are pretty simple, so I asked
if he needed them rather than presenting them.
Assuming we have
them, the algorithm is three lines long.
Perhaps you could enlighten us by publishing (a) the spec for each of
the get_nibble(s), shift, and add subroutines (b) the three-line
algorithm (c) what the algorithm is intended to achieve ...
>
He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.
Yes, input was a list [prototyping a byte array] of decimal digits. The
OUTPUT was also a list of something. A few messages later, it became
clear that the output desired was a list of hexadecimal digits. Until
he revealed that the input was up to 24 decimal digits, I was pursuing
the notion that a solution involving converting decimal to binary (in a
32-bit long) then to hexadecimal was the way to go.
What is apparently needed is an algorithm for converting a "large"
number from a representation of one base-10 digit per storage unit to
one of a base-16 digit per storage unit, when the size of the number
exceeds the size (8, 16, 32, etc. bits) of the "registers" available. Is
that what you have?
Cheers,
John
John Machin wrote:
br***********************@yahoo.com wrote:
My version assumes three subroutines: extracting
nibbles, shifting, and adding. Those are pretty simple, so I asked
if he needed them rather than presenting them.
Assuming we have
them, the algorithm is three lines long.
Perhaps you could enlighten us by publishing (a) the spec for each of
the get_nibble(s), shift, and add subroutines (b) the three-line
algorithm (c) what the algorithm is intended to achieve ...
"For each nibble n of x" means to take each 4 bit piece of the BCD
integer as a value from zero to sixteen (though only 0 through 9
will appear), from most significant to least significant. "Adding"
integers and "shifting" binary integers is well-defined
terminology. I already posted the three-line algorithm. It
appeared immediately under the phrase "To turn BCD x to binary
integer y," and that is what it is intended to achieve.
He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.
Yes, input was a list [prototyping a byte array] of decimal digits. The
OUTPUT was also a list of something. A few messages later, it became
clear that the output desired was a list of hexadecimal digits. Until
he revealed that the input was up to 24 decimal digits, I was pursuing
the notion that a solution involving converting decimal to binary (in a
32-bit long) then to hexadecimal was the way to go.
What is apparently needed is an algorithm for converting a "large"
number from a representation of one base-10 digit per storage unit to
one of a base-16 digit per storage unit, when the size of the number
exceeds the size (8, 16, 32, etc. bits) of the "registers" available.
I read his "Yes I realized that after writing it." response to
Dennis Lee Bieber to mean Bieber was correct and what he wanted
was to go from BCD to a normal binary integer, which is base 256.
The point of posting the simple high-level version of the
algorithm was to show a general form that works regardless of
particular languages, register sizes and storage considerations.
Those matters can affect the details of how one shifts a binary
integer left one bit, but shifting is not complicated in any
plausible case.
Is that what you have?
I'm sorry my post so confused, and possibly offended you.

Bryan br***********************@yahoo.com wrote:
John Machin wrote:
br***********************@yahoo.com wrote:
>My version assumes three subroutines: extracting
nibbles, shifting, and adding. Those are pretty simple, so I asked
if he needed them rather than presenting them.
Assuming we have
them, the algorithm is three lines long.
Perhaps you could enlighten us by publishing (a) the spec for each of
the get_nibble(s), shift, and add subroutines (b) the three-line
algorithm (c) what the algorithm is intended to achieve ...
"For each nibble n of x" means to take each 4 bit piece of the BCD
integer as a value from zero to sixteen (though only 0 through 9
will appear), from most significant to least significant.
The OP's input, unvaryingly through the whole thread, even surviving to
his Javacard implementation of add() etc, is a list/array of decimal
digits (0 <= value <= 9). Extracting a nibble is so simple that
mentioning a "subroutine" might make the gentle reader wonder whether
there was something deeper that they had missed.
"Adding"
integers and "shifting" binary integers is well-defined
terminology.
Yes, but it's the *representation* of those integers that's been the
problem throughout.
I already posted the threeline algorithm. It
appeared immediately under the phrase "To turn BCD x to binary
integer y," and that is what it is intended to achieve.
Oh, that "algorithm". The good ol' num = num * base + digit is an
"algorithm"???
The problem with that is that the OP has always maintained that he has
no facility for handling a binary integer ("num") longer than 16 bits
- no 32-bit long, no bignum package that didn't need "long", ...
>
He took a while to state the problem, but was clear from the start
that he had lists of digits rather than an integer datatype.
Yes, input was a list [prototyping a byte array] of decimal digits. The
OUTPUT was also a list of something. A few messages later, it became
clear that the output desired was a list of hexadecimal digits. Until
he revealed that the input was up to 24 decimal digits, I was pursuing
the notion that a solution involving converting decimal to binary (in a
32-bit long) then to hexadecimal was the way to go.
What is apparently needed is an algorithm for converting a "large"
number from a representation of one base-10 digit per storage unit to
one of a base-16 digit per storage unit, when the size of the number
exceeds the size (8, 16, 32, etc. bits) of the "registers" available.
I read his "Yes I realized that after writing it." response to
Dennis Lee Bieber to mean Bieber was correct and what he wanted
was to go from BCD to a normal binary integer, which is base 256.
Where I come from, a "normal binary integer" is base 2. It can be
broken up into chunks of any size greater than 1 bit, but practically
according to the word size of the CPU: 8, 16, 32, 64, ... bits. Since
when is base 256 "normal" and in what sense of normal?
The OP maintained the line that he has no facility for handling a
base-256 number longer than 2 base-256 digits.
The dialogue between Dennis and the OP wasn't the epitome of clarity:
[OP]
My apologies, I clearly made a mistake with my calculator, yes the
resulting array I would need is [0xb,0xc,0x6,0x1,0x4,0xe]
[Dennis]
Take note that this[**1**] is NOT a BCD form for "12345678". BCD
(typically
packed) uses four bits per decimal digit. That would make "12345678" =>
0x12, 0x34, 0x56, 0x78 (ignoring matters of big/little end).
The binary representation of 12345678, in bytes, is 0xBC, 0x61, 0x4E
0xb, 0xc... is really 0x0B, 0x0C... 8bits per byte, with MSB set to
0000.
Compare:
BCD 00010010 00110100 01010110 01111000
binary 10111100 01100001 01001110
your 00001011 00001100 00000110 00000001 00000100 00001110
[OP]
Yes I realized that [**2**] after writing it.
.... [**1**] Dennis's "this" refers to the OP's *output* which is
patently not what the OP was calling BCD.
[**2**] The referent of the OP's "that" can't be determined
unambiguously, IMHO.
The point of posting the simple high-level version of the
algorithm was to show a general form that works regardless of
particular languages, register sizes and storage considerations.
Those matters can affect the details of how one shifts a binary
integer left one bit, but shifting is not complicated in any
plausible case.
Is that what you have?
I'm sorry my post so confused, and possibly offended you.
It didn't confuse me. I was merely wondering whether you did in fact
have a method of converting from base b1 (e.g. 10) to base b2 (e.g. 16)
without assembling the number in some much larger base b3 (e.g. 256).
Offended? Experts have tried repeatedly, and not succeeded :)
Cheers,
John
"John Machin" <sj******@lexicon.netwrote:
 br***********************@yahoo.com wrote:

 >My version assumes three subroutines: extracting
 nibbles, shifting, and adding. Those are pretty simple, so I asked
 if he needed them rather than presenting them.
 Assuming we have
 them, the algorithm is three lines long.

 Perhaps you could enlighten us by publishing (a) the spec for each of
 the get_nibble(s), shift, and add subroutines (b) the three-line
 algorithm (c) what the algorithm is intended to achieve ...

 >
 He took a while to state the problem, but was clear from the start
 that he had lists of digits rather than an integer datatype.

 Yes, input was a list [prototyping a byte array] of decimal digits. The
 OUTPUT was also a list of something. A few messages later, it became
 clear that the output desired was a list of hexadecimal digits. Until
 he revealed that the input was up to 24 decimal digits, I was pursuing
 the notion that a solution involving converting decimal to binary (in a
 32-bit long) then to hexadecimal was the way to go.

 What is apparently needed is an algorithm for converting a "large"
 number from a representation of one base-10 digit per storage unit to
 one of a base-16 digit per storage unit, when the size of the number
 exceeds the size (8, 16, 32, etc. bits) of the "registers" available. Is
 that what you have?

 Cheers,
 John
I actually read most of this thread as it happened and could not really figure
out what the OP was on about.
If the above is a true statement of the problem, then it's more difficult to do
in a high level language, when the results exceed the native size that the
compiler or interpreter writers thought was a reasonable number of bits.
- ten to the 24 is of the order of 80 binary bits ...
So you need a (say) twelve byte result field for the binary... (that's three 32
bit values concatenated)
you clear the result field out to zero.
Then you feed in the decimal digits, from the most significant side, into a
routine that multiplies the result by ten and then adds the digit. (yes you have
to write this twelve byte Ascii/binary thing yourself)
When you have done this for all the digits, you have a binary number, and
getting hex from binary a nibble at a time is easy...
Well it's easy in assembler, even on a crippled little 8-bit processor, anyway...
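Hendrik's multiply-by-ten-and-add step over a multi-byte result field, sketched in Python (the name mul10_add is made up; each intermediate value is at most 255*10 + carry, well inside 16 bits, so the same loop works in C on the target):

```python
def mul10_add(num, digit):
    """Multiply a big-endian byte-array number by 10 and add a decimal
    digit, in place. Returns the carry out of the top byte (nonzero
    means the field overflowed)."""
    carry = digit
    for i in range(len(num) - 1, -1, -1):
        v = num[i] * 10 + carry     # at most 255*10 + carry: fits 16 bits
        num[i] = v & 0xFF
        carry = v >> 8
    return carry
```

Feeding the decimal digits in most-significant-first builds the binary value; for example, [1,2,3,4,5,6,7,8] into a 3-byte field yields 0xBC 0x61 0x4E.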
In python I would take a hard look at what I could do with the decimal module -
doing the reverse of the above but dividing by 16 repetitively and using the
remainder or the fraction to give the hex numbers in lsb to msb order, and doing
a lookup (prolly using a dict) to get the hex digits...
just my $0.02...
- Hendrik
John Machin wrote:
br***********************@yahoo.com wrote:
"For each nibble n of x" means to take each 4 bit piece of the BCD
integer as a value from zero to sixteen (though only 0 through 9
will appear), from most significant to least significant.
The OP's input, unvaryingly through the whole thread, even surviving to
his Javacard implementation of add() etc, is a list/array of decimal
digits (0 <= value <= 9). Extracting a nibble is so simple that
mentioning a "subroutine" might make the gentle reader wonder whether
there was something deeper that they had missed.
Yes, it's simple; that was the point. The most complex routine I
assumed is integer addition, and it's not really hard. I'll
present an example below.
"Adding"
integers and "shifting" binary integers is well-defined
terminology.
Yes, but it's the *representation* of those integers that's been the
problem throughout.
Right. To solve that problem, I give the high-level algorithm and
deal with the representation in the shift and add procedures.
I already posted the threeline algorithm. It
appeared immediately under the phrase "To turn BCD x to binary
integer y," and that is what it is intended to achieve.
Oh, that "algorithm". The good ol' num = num * base + digit is an
"algorithm"???
You lost me. The algorithm I presented didn't use a multiply
operator. It could have, and of course it would still be an
algorithm.
The problem with that is that the OP has always maintained that he has
no facility for handling a binary integer ("num") longer than 16 bits
- no 32-bit long, no bignum package that didn't need "long", ...
No problem. Here's an example of an add procedure he might use in
C. It adds modestly-large integers, as base-256 big-endian
sequences of bytes. It doesn't need an int any larger than 8 bits.
Untested:
typedef unsigned char uint8;
#define SIZEOF_BIGINT 16
uint8 add(uint8* result, const uint8* a, const uint8* b)
/* Set result to a+b, returning carry out of MSB. */
{
    uint8 carry = 0;
    unsigned int i = SIZEOF_BIGINT;
    while (i > 0) {
        --i;
        result[i] = (a[i] + b[i] + carry) & 0xFF;
        carry = carry ? result[i] <= a[i] : result[i] < a[i];
    }
    return carry;
}
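The same add logic transliterated to Python for checking (the helper name add_bytes is made up; big-endian byte lists in, (carry, result) out). The carry test works because in modulo-256 addition the result byte is smaller than either input byte exactly when a wraparound occurred:

```python
def add_bytes(a, b):
    """Add two equal-length big-endian byte lists modulo 256**len(a).
    Returns (carry_out, result), mirroring the C add() above."""
    result = [0] * len(a)
    carry = 0
    for i in range(len(a) - 1, -1, -1):
        result[i] = (a[i] + b[i] + carry) & 0xFF
        # wraparound detection, same expression as the C version
        carry = 1 if (result[i] <= a[i] if carry else result[i] < a[i]) else 0
    return carry, result
```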
Where I come from, a "normal binary integer" is base 2. It can be
broken up into chunks of any size greater than 1 bit, but practically
according to the word size of the CPU: 8, 16, 32, 64, ... bits. Since
when is base 256 "normal" and in what sense of normal?
All the popular CPUs address storage in bytes. In C all variable
sizes are in units of char/unsigned char, and unsigned char must
hold zero through 255.
The OP maintained the line that he has no facility for handling a
base-256 number longer than 2 base-256 digits.
So he'll have to build what's needed. That's why I showed the
problem broken down to shifts and adds; they're easy to build.
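For instance, the one-bit left shift over a big-endian byte array is only a few lines. A Python sketch (the name shl1 is made up; the same loop is trivial in C with 8-bit unsigned chars):

```python
def shl1(num):
    """Shift a big-endian list of bytes left one bit, in place.
    Returns the carry out of the most significant byte."""
    carry = 0
    for i in range(len(num) - 1, -1, -1):
        doubled = (num[i] << 1) | carry   # at most 0x1FF
        num[i] = doubled & 0xFF
        carry = doubled >> 8
    return carry
```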
The dialogue between Dennis and the OP wasn't the epitome of clarity:
Well, I found Dennis clear.
[...]
I was merely wondering whether you did in fact
have a method of converting from base b1 (e.g. 10) to base b2 (e.g. 16)
without assembling the number in some much larger base b3 (e.g. 256).
I'm not sure what that means.

Bryan
John Machin wrote:
bryanjugglercryptograp...@yahoo.com wrote:
"For each nibble n of x" means to take each 4 bit piece of the BCD
integer as a value from zero to sixteen (though only 0 through 9
will appear), from most significant to least significant.
The OP's input, unvaryingly through the whole thread, even surviving to
his Javacard implementation of add() etc, is a list/array of decimal
digits (0 <= value <= 9). Extracting a nibble is so simple that
mentioning a "subroutine" might make the gentle reader wonder whether
there was something deeper that they had missed.
Yes, it's simple; that was the point. The most complex routine I
assumed is integer addition, and it's not really hard. I'll
present an example below.
"Adding"
integers and "shifting" binary integers is well-defined
terminology.
Yes, but it's the *representation* of those integers that's been the
problem throughout.
Right. To solve that problem, I give the high-level algorithm and
deal with the representation in the shift and add procedures.
I already posted the threeline algorithm. It
appeared immediately under the phrase "To turn BCD x to binary
integer y," and that is what it is intended to achieve.
Oh, that "algorithm". The good ol' num = num * base + digit is an
"algorithm"???
You lost me. The algorithm I presented didn't use a multiply
operator. It could have, and of course it would still be an
algorithm.
The problem with that is that the OP has always maintained that he has
no facility for handling a binary integer ("num") longer than 16 bits
- no 32-bit long, no bignum package that didn't need "long", ...
No problem. Here's an example of an add procedure he might use in
C. It adds modestly-large integers, as base-256 big-endian
sequences of bytes. It doesn't need an int any larger than 8 bits.
Untested:
typedef unsigned char uint8;
#define SIZEOF_BIGINT 16
uint8 add(uint8* result, const uint8* a, const uint8* b)
/* Set result to a+b, returning carry out of MSB. */
{
    uint8 carry = 0;
    unsigned int i = SIZEOF_BIGINT;
    while (i > 0) {
        --i;
        result[i] = (a[i] + b[i] + carry) & 0xFF;
        carry = carry ? result[i] <= a[i] : result[i] < a[i];
    }
    return carry;
}
Where I come from, a "normal binary integer" is base 2. It can be
broken up into chunks of any size greater than 1 bit, but practically
according to the word size of the CPU: 8, 16, 32, 64, ... bits. Since
when is base 256 "normal" and in what sense of normal?
All the popular CPUs address storage in bytes. In C all variable
sizes are in units of char/unsigned char, and unsigned char must
hold zero through 255.
The OP maintained the line that he has no facility for handling a
base-256 number longer than 2 base-256 digits.
So he'll have to build what's needed. That's why I showed the
problem broken down to shifts and adds; they're easy to build.
The dialogue between Dennis and the OP wasn't the epitome of clarity:
Well, I found Dennis clear.
[...]
I was merely wondering whether you did in fact
have a method of converting from base b1 (e.g. 10) to base b2 (e.g. 16)
without assembling the number in some much larger base b3 (e.g. 256).
I'm not sure what that means.

Bryan